Mar 13 20:27:18 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 13 20:27:18 crc restorecon[4682]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:18 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 13 20:27:19 crc restorecon[4682]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc 
restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc 
restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 
20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc 
restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc 
restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19
crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 
20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc 
restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc 
restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc 
restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 13 20:27:19 crc restorecon[4682]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc 
restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 13 20:27:19 crc restorecon[4682]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc 
restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 13 20:27:19 crc restorecon[4682]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 13 20:27:20 crc kubenswrapper[5029]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 20:27:20 crc kubenswrapper[5029]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 13 20:27:20 crc kubenswrapper[5029]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 20:27:20 crc kubenswrapper[5029]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 20:27:20 crc kubenswrapper[5029]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 13 20:27:20 crc kubenswrapper[5029]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.351471 5029 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354643 5029 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354661 5029 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354665 5029 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354671 5029 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354681 5029 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354684 5029 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354690 5029 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354693 5029 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354698 5029 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354703 5029 
feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354707 5029 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354712 5029 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354717 5029 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354722 5029 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354727 5029 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354732 5029 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354739 5029 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354743 5029 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354747 5029 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354751 5029 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354758 5029 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354764 5029 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354768 5029 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354774 5029 feature_gate.go:330] unrecognized 
feature gate: SignatureStores Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354779 5029 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354784 5029 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354787 5029 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354791 5029 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354795 5029 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354802 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354806 5029 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354809 5029 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354813 5029 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354818 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354821 5029 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354825 5029 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354830 5029 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354834 5029 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354839 5029 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354844 5029 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354862 5029 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354872 5029 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354877 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354882 5029 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354886 5029 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354891 5029 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354895 5029 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354899 5029 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354904 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354910 5029 feature_gate.go:330] unrecognized feature gate: Example Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354914 5029 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354919 5029 feature_gate.go:353] Setting GA feature gate 
CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354927 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354931 5029 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354935 5029 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354939 5029 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354944 5029 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354949 5029 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354953 5029 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354957 5029 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354962 5029 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354966 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354970 5029 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354980 5029 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354985 5029 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354989 5029 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation 
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354992 5029 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.354997 5029 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.355000 5029 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.355004 5029 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.355007 5029 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356276 5029 flags.go:64] FLAG: --address="0.0.0.0" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356319 5029 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356331 5029 flags.go:64] FLAG: --anonymous-auth="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356337 5029 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356349 5029 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356357 5029 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356365 5029 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356371 5029 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356377 5029 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356382 5029 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356388 
5029 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356395 5029 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356400 5029 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356409 5029 flags.go:64] FLAG: --cgroup-root="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356414 5029 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356420 5029 flags.go:64] FLAG: --client-ca-file="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356426 5029 flags.go:64] FLAG: --cloud-config="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356431 5029 flags.go:64] FLAG: --cloud-provider="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356436 5029 flags.go:64] FLAG: --cluster-dns="[]" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356442 5029 flags.go:64] FLAG: --cluster-domain="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356448 5029 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356455 5029 flags.go:64] FLAG: --config-dir="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356463 5029 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356468 5029 flags.go:64] FLAG: --container-log-max-files="5" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356474 5029 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356478 5029 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356482 5029 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 
20:27:20.356487 5029 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356492 5029 flags.go:64] FLAG: --contention-profiling="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356497 5029 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356505 5029 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356510 5029 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356516 5029 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356541 5029 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356547 5029 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356553 5029 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356559 5029 flags.go:64] FLAG: --enable-load-reader="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356565 5029 flags.go:64] FLAG: --enable-server="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356574 5029 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356581 5029 flags.go:64] FLAG: --event-burst="100" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356586 5029 flags.go:64] FLAG: --event-qps="50" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356590 5029 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356595 5029 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356599 5029 flags.go:64] FLAG: --eviction-hard="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 
20:27:20.356607 5029 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356612 5029 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356616 5029 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356624 5029 flags.go:64] FLAG: --eviction-soft="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356630 5029 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356635 5029 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356641 5029 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356647 5029 flags.go:64] FLAG: --experimental-mounter-path="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356652 5029 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356658 5029 flags.go:64] FLAG: --fail-swap-on="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356663 5029 flags.go:64] FLAG: --feature-gates="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356674 5029 flags.go:64] FLAG: --file-check-frequency="20s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356679 5029 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356686 5029 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356702 5029 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356707 5029 flags.go:64] FLAG: --healthz-port="10248" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356711 5029 flags.go:64] FLAG: --help="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 
20:27:20.356716 5029 flags.go:64] FLAG: --hostname-override="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356722 5029 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356727 5029 flags.go:64] FLAG: --http-check-frequency="20s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356737 5029 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356741 5029 flags.go:64] FLAG: --image-credential-provider-config="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356746 5029 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356751 5029 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356756 5029 flags.go:64] FLAG: --image-service-endpoint="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356762 5029 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356767 5029 flags.go:64] FLAG: --kube-api-burst="100" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356772 5029 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356781 5029 flags.go:64] FLAG: --kube-api-qps="50" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356787 5029 flags.go:64] FLAG: --kube-reserved="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356792 5029 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356798 5029 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356803 5029 flags.go:64] FLAG: --kubelet-cgroups="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356808 5029 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 13 20:27:20 crc 
kubenswrapper[5029]: I0313 20:27:20.356813 5029 flags.go:64] FLAG: --lock-file="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356817 5029 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356821 5029 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356829 5029 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356838 5029 flags.go:64] FLAG: --log-json-split-stream="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356843 5029 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356870 5029 flags.go:64] FLAG: --log-text-split-stream="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356876 5029 flags.go:64] FLAG: --logging-format="text" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356881 5029 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356887 5029 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356892 5029 flags.go:64] FLAG: --manifest-url="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356901 5029 flags.go:64] FLAG: --manifest-url-header="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356931 5029 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356937 5029 flags.go:64] FLAG: --max-open-files="1000000" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356947 5029 flags.go:64] FLAG: --max-pods="110" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356953 5029 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356959 5029 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 13 20:27:20 crc 
kubenswrapper[5029]: I0313 20:27:20.356965 5029 flags.go:64] FLAG: --memory-manager-policy="None" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356970 5029 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356976 5029 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.356985 5029 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357024 5029 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357287 5029 flags.go:64] FLAG: --node-status-max-images="50" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357302 5029 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357314 5029 flags.go:64] FLAG: --oom-score-adj="-999" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357324 5029 flags.go:64] FLAG: --pod-cidr="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357335 5029 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357347 5029 flags.go:64] FLAG: --pod-manifest-path="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357356 5029 flags.go:64] FLAG: --pod-max-pids="-1" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357366 5029 flags.go:64] FLAG: --pods-per-core="0" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357375 5029 flags.go:64] FLAG: --port="10250" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357384 5029 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357394 5029 flags.go:64] FLAG: 
--provider-id="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357404 5029 flags.go:64] FLAG: --qos-reserved="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357413 5029 flags.go:64] FLAG: --read-only-port="10255" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357423 5029 flags.go:64] FLAG: --register-node="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357432 5029 flags.go:64] FLAG: --register-schedulable="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357441 5029 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357456 5029 flags.go:64] FLAG: --registry-burst="10" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357465 5029 flags.go:64] FLAG: --registry-qps="5" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357473 5029 flags.go:64] FLAG: --reserved-cpus="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357483 5029 flags.go:64] FLAG: --reserved-memory="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357494 5029 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357503 5029 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357512 5029 flags.go:64] FLAG: --rotate-certificates="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357521 5029 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357530 5029 flags.go:64] FLAG: --runonce="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357539 5029 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357548 5029 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357558 5029 flags.go:64] FLAG: --seccomp-default="false" Mar 
13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357567 5029 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357575 5029 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357586 5029 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357643 5029 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357653 5029 flags.go:64] FLAG: --storage-driver-password="root" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357662 5029 flags.go:64] FLAG: --storage-driver-secure="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357671 5029 flags.go:64] FLAG: --storage-driver-table="stats" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357680 5029 flags.go:64] FLAG: --storage-driver-user="root" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357689 5029 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357699 5029 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357709 5029 flags.go:64] FLAG: --system-cgroups="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357718 5029 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357733 5029 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357742 5029 flags.go:64] FLAG: --tls-cert-file="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357751 5029 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357764 5029 flags.go:64] FLAG: --tls-min-version="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357772 5029 flags.go:64] FLAG: 
--tls-private-key-file="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357782 5029 flags.go:64] FLAG: --topology-manager-policy="none" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357790 5029 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357800 5029 flags.go:64] FLAG: --topology-manager-scope="container" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357809 5029 flags.go:64] FLAG: --v="2" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357821 5029 flags.go:64] FLAG: --version="false" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357832 5029 flags.go:64] FLAG: --vmodule="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357843 5029 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.357879 5029 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358075 5029 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358086 5029 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358095 5029 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358104 5029 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358112 5029 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358121 5029 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358132 5029 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358141 5029 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358151 5029 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358159 5029 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358169 5029 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358177 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358186 5029 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358194 5029 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358202 5029 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358210 5029 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358218 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358226 5029 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358234 5029 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358242 5029 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358250 5029 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 20:27:20 crc 
kubenswrapper[5029]: W0313 20:27:20.358257 5029 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358265 5029 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358272 5029 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358283 5029 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358293 5029 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358302 5029 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358312 5029 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358322 5029 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358333 5029 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358341 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358350 5029 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358359 5029 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358369 5029 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358377 5029 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358385 5029 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358393 5029 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358401 5029 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358410 5029 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358420 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358428 5029 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358436 5029 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358444 5029 
feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358452 5029 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358460 5029 feature_gate.go:330] unrecognized feature gate: Example Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358468 5029 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358475 5029 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358483 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358490 5029 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358498 5029 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358506 5029 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358513 5029 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358520 5029 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358528 5029 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358536 5029 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358544 5029 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358551 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 
20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358559 5029 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358567 5029 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358574 5029 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358582 5029 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358590 5029 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358597 5029 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358608 5029 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358618 5029 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358627 5029 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358635 5029 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358643 5029 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358651 5029 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358658 5029 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.358667 5029 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.358693 5029 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.367024 5029 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.367074 5029 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367165 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367180 5029 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367186 5029 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367192 5029 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367198 5029 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367203 5029 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367208 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367213 5029 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367219 5029 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367224 5029 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367229 5029 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367234 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367238 5029 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367243 5029 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367248 5029 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367253 5029 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367257 5029 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367261 5029 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367266 5029 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367271 5029 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367277 5029 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367282 5029 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367287 5029 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367292 5029 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367298 5029 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367305 5029 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367310 5029 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367315 5029 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367320 5029 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367325 5029 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367330 5029 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367334 5029 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367339 5029 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367344 5029 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367357 5029 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367361 5029 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367366 5029 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367371 5029 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367377 5029 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367382 5029 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367388 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367392 5029 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367396 5029 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367401 5029 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367406 5029 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367411 5029 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367415 5029 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367420 5029 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367424 5029 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367428 5029 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367433 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367437 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367441 5029 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367446 5029 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367450 5029 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367455 5029 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367463 5029 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367469 5029 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367473 5029 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367477 5029 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367481 5029 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367485 5029 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367489 5029 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367493 5029 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367497 5029 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367501 5029 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367504 5029 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367508 5029 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367512 5029 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367518 5029 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367531 5029 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.367539 5029 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367677 5029 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367687 5029 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367691 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367696 5029 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367701 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367706 5029 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367711 5029 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367716 5029 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367721 5029 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367725 5029 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367730 5029 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367734 5029 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367738 5029 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367744 5029 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367751 5029 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367756 5029 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367761 5029 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367767 5029 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367772 5029 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367776 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367780 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367784 5029 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367788 5029 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367793 5029 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367798 5029 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367803 5029 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367807 5029 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367811 5029 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367815 5029 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367819 5029 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367823 5029 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367827 5029 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367831 5029 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367835 5029 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367862 5029 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367867 5029 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367871 5029 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367875 5029 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367879 5029 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367883 5029 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367887 5029 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367891 5029 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367895 5029 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367899 5029 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367903 5029 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367907 5029 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367911 5029 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367915 5029 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367920 5029 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367925 5029 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367930 5029 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367934 5029 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367938 5029 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367941 5029 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367945 5029 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367950 5029 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367954 5029 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367958 5029 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367962 5029 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367966 5029 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367970 5029 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367974 5029 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367978 5029 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367982 5029 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367986 5029 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367990 5029 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.367996 5029 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.368001 5029 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.368005 5029 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.368010 5029 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.368021 5029 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.368028 5029 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.369055 5029 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.373087 5029 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.381168 5029 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.381331 5029 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.383390 5029 server.go:997] "Starting client certificate rotation"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.383421 5029 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.384498 5029 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.406654 5029 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.408717 5029 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.412692 5029 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.432179 5029 log.go:25] "Validated CRI v1 runtime API"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.465364 5029 log.go:25] "Validated CRI v1 image API"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.468585 5029 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.476398 5029 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-13-20-23-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.476450 5029 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.498414 5029 manager.go:217] Machine: {Timestamp:2026-03-13 20:27:20.495500539 +0000 UTC m=+0.511582962 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf BootID:044ba3ae-5433-4825-be63-55a4dd605347 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:76:20:c3 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:76:20:c3 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:6b:0a:64 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9a:21:61 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7e:08:62 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e5:2b:f9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9a:2b:7b:02:19:82 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:3b:95:be:75:40 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.498645 5029 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.498954 5029 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.499374 5029 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.499555 5029 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.499600 5029 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.499831 5029 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.499844 5029 container_manager_linux.go:303] "Creating device plugin manager"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.500597 5029 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.500641 5029 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.501919 5029 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.502064 5029 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.506244 5029 kubelet.go:418] "Attempting to sync node with API server"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.506276 5029 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.506307 5029 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.506327 5029 kubelet.go:324] "Adding apiserver pod source"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.506342 5029 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.510826 5029 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.513028 5029 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.515440 5029 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.516284 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused
Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.516414 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError"
Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.516531 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused
Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.516660 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517636 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517665 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517673 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517680 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517692 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517702 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517711 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517723 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517734 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517744 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517755 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.517763 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.520026 5029 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.520584 5029
server.go:1280] "Started kubelet" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.521471 5029 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.522515 5029 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 20:27:20 crc systemd[1]: Started Kubernetes Kubelet. Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.522689 5029 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.523452 5029 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.523503 5029 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.525726 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.523770 5029 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.529779 5029 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.523403 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.532165 5029 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.535034 5029 server.go:460] "Adding debug handlers to kubelet server" Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.535482 5029 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="200ms" Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.535934 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.536152 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.544259 5029 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.544309 5029 factory.go:55] Registering systemd factory Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.544333 5029 factory.go:221] Registration of the systemd container factory successfully Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.544959 5029 factory.go:153] Registering CRI-O factory Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.545054 5029 factory.go:221] Registration of the crio container factory successfully Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.545087 5029 factory.go:103] Registering Raw factory Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.545105 5029 manager.go:1196] Started watching for new ooms in manager Mar 13 20:27:20 crc 
kubenswrapper[5029]: I0313 20:27:20.546305 5029 manager.go:319] Starting recovery of all containers Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546479 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546622 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546653 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546683 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546710 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546735 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546779 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546807 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546839 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546900 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546929 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546960 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.546985 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547017 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547043 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547111 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547138 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547167 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547192 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547219 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547248 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547275 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547302 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547333 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547390 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547417 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547456 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547486 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547517 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547544 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" 
seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547574 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547600 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547629 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547658 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547684 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547710 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: 
I0313 20:27:20.547734 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547760 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547791 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547825 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547889 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547924 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547953 5029 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.547982 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548013 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548104 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548137 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548164 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548195 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548224 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548252 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548284 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548325 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548355 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548390 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548426 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548492 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548519 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548546 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548576 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548605 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548632 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548661 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548692 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548722 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548766 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548804 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548839 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548939 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.548970 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549001 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549035 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549064 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" 
seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549093 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.546188 5029 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.181:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c807ffff4532d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,LastTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549135 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549227 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549285 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549312 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549341 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549365 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549607 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549625 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549642 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549667 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549702 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549728 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549757 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549779 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549802 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549824 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549847 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.549899 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550016 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550084 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550112 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" 
seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550134 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550156 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550182 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550204 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550228 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550250 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 
20:27:20.550274 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550294 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550315 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550352 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550380 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550410 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550434 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550462 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550489 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550512 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550534 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550558 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550583 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550603 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550626 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550647 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550673 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550695 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550717 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" 
seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550740 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550761 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550781 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550803 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550828 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550876 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 
20:27:20.550900 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550922 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550945 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550964 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.550988 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551008 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551029 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551053 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551073 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551095 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551123 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551145 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551169 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551189 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551215 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551247 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551271 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551294 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551318 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551341 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551362 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551389 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.551412 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556041 5029 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556121 5029 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556152 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556176 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556198 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556225 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556257 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556298 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556320 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556341 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556361 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556384 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556403 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556442 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" 
seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556463 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556482 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556504 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556526 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556547 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556570 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556590 
5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556610 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556632 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556654 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556675 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556703 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556729 5029 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556760 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556781 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556805 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556825 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556846 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556943 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.556999 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557019 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557041 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557061 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557084 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557104 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557130 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557152 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557171 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557195 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557218 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557237 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 13 
20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557257 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557277 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557299 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557318 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557339 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557363 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557385 5029 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557406 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557457 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557479 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557502 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557521 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557541 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557560 5029 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557583 5029 reconstruct.go:97] "Volume reconstruction finished" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.557597 5029 reconciler.go:26] "Reconciler: start to sync state" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.563595 5029 manager.go:324] Recovery completed Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.574662 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.576676 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.576729 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.576744 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.577596 5029 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.577612 5029 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.577635 5029 state_mem.go:36] "Initialized new in-memory state store" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.593135 5029 policy_none.go:49] "None policy: Start" Mar 13 20:27:20 crc 
kubenswrapper[5029]: I0313 20:27:20.593138 5029 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.594149 5029 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.594209 5029 state_mem.go:35] "Initializing new in-memory state store" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.598055 5029 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.598100 5029 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.598128 5029 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.598181 5029 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 20:27:20 crc kubenswrapper[5029]: W0313 20:27:20.599382 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.599475 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.630161 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.644449 5029 manager.go:334] 
"Starting Device Plugin manager" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.644515 5029 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.644532 5029 server.go:79] "Starting device plugin registration server" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.646646 5029 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.646886 5029 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.647179 5029 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.647335 5029 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.647347 5029 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.658285 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.698465 5029 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.698584 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.699746 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.699791 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.699804 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.699988 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.700216 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.700259 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701048 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701076 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701030 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701108 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701121 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701085 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701257 5029 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701501 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701527 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701786 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701812 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701820 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.701921 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.702589 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.702834 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.702871 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.702881 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.702957 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.702975 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.702995 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703012 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703072 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703095 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703332 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703523 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703547 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703556 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703777 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703809 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703888 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703928 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.703941 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.705150 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.705183 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.705195 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.705677 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.705721 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.705740 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.736555 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="400ms" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.747726 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.749062 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.749129 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.749142 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.749179 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.749810 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.181:6443: connect: connection refused" node="crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759363 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759405 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759423 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759442 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759458 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759548 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759617 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759642 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759663 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759698 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759717 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759739 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759835 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759931 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.759973 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861594 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861718 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861756 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861786 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861825 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861884 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861918 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861954 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861938 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861996 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862006 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862030 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.861886 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 
20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862074 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862081 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862145 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862181 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862251 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862282 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862337 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862352 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862391 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862455 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862460 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 
20:27:20.862491 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862517 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862545 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862559 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862608 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.862761 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.950230 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.952103 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.952155 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.952165 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:20 crc kubenswrapper[5029]: I0313 20:27:20.952195 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:27:20 crc kubenswrapper[5029]: E0313 20:27:20.952901 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.181:6443: connect: connection refused" node="crc" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.036676 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.058149 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.075393 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 13 20:27:21 crc kubenswrapper[5029]: W0313 20:27:21.082992 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c5b5421f3ff37a961fad5b6bfcfbc1c53f52b795495d0ad0004fb5a0c81e2de5 WatchSource:0}: Error finding container c5b5421f3ff37a961fad5b6bfcfbc1c53f52b795495d0ad0004fb5a0c81e2de5: Status 404 returned error can't find the container with id c5b5421f3ff37a961fad5b6bfcfbc1c53f52b795495d0ad0004fb5a0c81e2de5 Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.087153 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.092563 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:27:21 crc kubenswrapper[5029]: W0313 20:27:21.104430 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-363f4b9dd841399baa10893a1e170a52f4471e34f79ce13986a87723132c64e0 WatchSource:0}: Error finding container 363f4b9dd841399baa10893a1e170a52f4471e34f79ce13986a87723132c64e0: Status 404 returned error can't find the container with id 363f4b9dd841399baa10893a1e170a52f4471e34f79ce13986a87723132c64e0 Mar 13 20:27:21 crc kubenswrapper[5029]: W0313 20:27:21.105511 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-02e8d47aae95c5b60d1b52641173b370e8eabceddff35e4bb73efe7202f18bd1 WatchSource:0}: Error finding container 02e8d47aae95c5b60d1b52641173b370e8eabceddff35e4bb73efe7202f18bd1: Status 404 returned error can't find the container with id 
02e8d47aae95c5b60d1b52641173b370e8eabceddff35e4bb73efe7202f18bd1 Mar 13 20:27:21 crc kubenswrapper[5029]: W0313 20:27:21.108670 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-151a312e652f6cdd8718c001e035b3ec485ee79802ed1cfcd42fdbed6995f777 WatchSource:0}: Error finding container 151a312e652f6cdd8718c001e035b3ec485ee79802ed1cfcd42fdbed6995f777: Status 404 returned error can't find the container with id 151a312e652f6cdd8718c001e035b3ec485ee79802ed1cfcd42fdbed6995f777 Mar 13 20:27:21 crc kubenswrapper[5029]: E0313 20:27:21.138036 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="800ms" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.353630 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.355431 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.355483 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.355493 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.355521 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:27:21 crc kubenswrapper[5029]: E0313 20:27:21.356093 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.181:6443: connect: connection 
refused" node="crc" Mar 13 20:27:21 crc kubenswrapper[5029]: W0313 20:27:21.407495 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:21 crc kubenswrapper[5029]: E0313 20:27:21.407579 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" Mar 13 20:27:21 crc kubenswrapper[5029]: W0313 20:27:21.501109 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:21 crc kubenswrapper[5029]: E0313 20:27:21.501213 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.530838 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.606442 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"151a312e652f6cdd8718c001e035b3ec485ee79802ed1cfcd42fdbed6995f777"} Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.607892 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"02e8d47aae95c5b60d1b52641173b370e8eabceddff35e4bb73efe7202f18bd1"} Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.608955 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"363f4b9dd841399baa10893a1e170a52f4471e34f79ce13986a87723132c64e0"} Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.610610 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c5b5421f3ff37a961fad5b6bfcfbc1c53f52b795495d0ad0004fb5a0c81e2de5"} Mar 13 20:27:21 crc kubenswrapper[5029]: I0313 20:27:21.612088 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"39866b89685b500cc1c1b13f3b2174ff8df82061fb13dbf9d42e0373c1a104b8"} Mar 13 20:27:21 crc kubenswrapper[5029]: W0313 20:27:21.897804 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:21 crc kubenswrapper[5029]: E0313 20:27:21.898021 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" Mar 13 20:27:21 crc kubenswrapper[5029]: W0313 20:27:21.921682 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:21 crc kubenswrapper[5029]: E0313 20:27:21.921778 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" Mar 13 20:27:21 crc kubenswrapper[5029]: E0313 20:27:21.938838 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="1.6s" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.156215 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.157532 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.157567 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.157576 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.157600 5029 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:27:22 crc kubenswrapper[5029]: E0313 20:27:22.158038 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.181:6443: connect: connection refused" node="crc" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.464671 5029 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 20:27:22 crc kubenswrapper[5029]: E0313 20:27:22.465739 5029 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.531287 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.615998 5029 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2" exitCode=0 Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.616071 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2"} Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.616191 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.617266 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.617312 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.617327 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.619389 5029 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f94fc6f41ee63eae61c2f511d14cdf3806e5def2e55502466375b9a657e8b7a6" exitCode=0 Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.619558 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f94fc6f41ee63eae61c2f511d14cdf3806e5def2e55502466375b9a657e8b7a6"} Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.619903 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.623256 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.623332 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.623353 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.627329 5029 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd" exitCode=0 Mar 
13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.627435 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd"} Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.627463 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.631605 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.631663 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.631688 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.633940 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998"} Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.634011 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680"} Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.634036 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55"} Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.634067 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7"} Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.634038 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.635701 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.635762 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.635787 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.637154 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd" exitCode=0 Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.637200 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd"} Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.637653 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.642167 5029 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.642208 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.642220 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.648586 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.652088 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.652132 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.652155 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:22 crc kubenswrapper[5029]: I0313 20:27:22.786520 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:23 crc kubenswrapper[5029]: W0313 20:27:23.405144 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:23 crc kubenswrapper[5029]: E0313 20:27:23.405819 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" 
logger="UnhandledError" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.530830 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Mar 13 20:27:23 crc kubenswrapper[5029]: E0313 20:27:23.540214 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="3.2s" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.643322 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029"} Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.643392 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0"} Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.643413 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350"} Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.643432 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82"} Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.646330 5029 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a"} Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.646397 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5"} Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.646422 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6"} Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.646399 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.650776 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.650895 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.650913 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.654471 5029 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9d2ce1a7663e55379a9ae7620967204eaec693070b7744373b2f54a8488e96cd" exitCode=0 Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.654571 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9d2ce1a7663e55379a9ae7620967204eaec693070b7744373b2f54a8488e96cd"} Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.654843 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.656454 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.656510 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.656524 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.656926 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.656950 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.657188 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9c4411d537505129b70428e21f20cf412ef5dd3003f7bd7b09a5b97fc5622809"} Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.658923 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.658947 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.658957 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.658962 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.658993 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.659012 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.759042 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.760401 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.760448 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.760474 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:23 crc kubenswrapper[5029]: I0313 20:27:23.760511 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:27:23 crc kubenswrapper[5029]: E0313 20:27:23.761155 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.181:6443: connect: connection refused" node="crc" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.315995 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.323536 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.662743 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2383c531eb7993355b35bb03b36bd5e6a01b46f046df61cb4caf91e056be963e"} Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.662859 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.664017 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.664071 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.664093 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.666445 5029 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="04053f7bf280a918b270eb2b1be6988ff69f88c293c3b601d1740509c1f552c4" exitCode=0 Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.666533 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"04053f7bf280a918b270eb2b1be6988ff69f88c293c3b601d1740509c1f552c4"} Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.666587 5029 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.666621 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.666630 5029 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.666691 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.666634 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668201 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668173 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668266 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668278 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668224 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668345 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668361 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668380 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668243 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:24 crc 
kubenswrapper[5029]: I0313 20:27:24.668382 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668402 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:24 crc kubenswrapper[5029]: I0313 20:27:24.668419 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.161375 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.480842 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.677339 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8f67cf81d042bc7f29637cd8043d414a9c2b413f36602cad57caaa402663e102"} Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.677409 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"041488a41735862f6541de001124dbf962d63581b6ada9ab9f22e5f8ed726cea"} Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.677429 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f31cb1a90ed9fa5f8ad95d32a2324a69d255b9cd27c9be511de9e0212d19c6bf"} Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.677443 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5573add004a32067fde0ecfc2a9f880cf5b91d05d1304622eb26a7d36dbda4d7"} Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.677456 5029 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.677542 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.677457 5029 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.677676 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.678758 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.678785 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.678796 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.679294 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.679319 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:25 crc kubenswrapper[5029]: I0313 20:27:25.679329 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.596056 5029 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 20:27:26 crc kubenswrapper[5029]: 
I0313 20:27:26.686023 5029 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.686076 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.686062 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7089652396a5ebeda4285f2c39061bdc42021b133f87381666ea6cfe8536713f"} Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.686245 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.687032 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.687064 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.687076 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.687787 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.687819 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.687828 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.923685 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 
20:27:26.962150 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.963765 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.963813 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.963825 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:26 crc kubenswrapper[5029]: I0313 20:27:26.963879 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:27:27 crc kubenswrapper[5029]: I0313 20:27:27.688768 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:27 crc kubenswrapper[5029]: I0313 20:27:27.688932 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:27 crc kubenswrapper[5029]: I0313 20:27:27.689973 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:27 crc kubenswrapper[5029]: I0313 20:27:27.690016 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:27 crc kubenswrapper[5029]: I0313 20:27:27.690028 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:27 crc kubenswrapper[5029]: I0313 20:27:27.690341 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:27 crc kubenswrapper[5029]: I0313 20:27:27.690397 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:27 crc 
kubenswrapper[5029]: I0313 20:27:27.690416 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.057421 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.057594 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.058597 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.058627 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.058636 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.479053 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.479249 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.480262 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.480299 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.480314 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.605598 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.606093 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.607260 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.607311 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.607329 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.913701 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.914006 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.915336 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.915376 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:29 crc kubenswrapper[5029]: I0313 20:27:29.915385 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:30 crc kubenswrapper[5029]: E0313 20:27:30.658448 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.203133 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.203361 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.205227 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.205312 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.205339 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.207499 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.699596 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.701004 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.701138 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:31 crc kubenswrapper[5029]: I0313 20:27:31.701218 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.203538 5029 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.203620 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 20:27:34 crc kubenswrapper[5029]: W0313 20:27:34.269166 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.269258 5029 trace.go:236] Trace[855404307]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Mar-2026 20:27:24.267) (total time: 10001ms): Mar 13 20:27:34 crc kubenswrapper[5029]: Trace[855404307]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:27:34.269) Mar 13 20:27:34 crc kubenswrapper[5029]: Trace[855404307]: [10.001761451s] [10.001761451s] END Mar 13 20:27:34 crc kubenswrapper[5029]: E0313 20:27:34.269282 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 13 20:27:34 crc kubenswrapper[5029]: W0313 20:27:34.279802 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.279961 5029 trace.go:236] Trace[1779247754]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Mar-2026 20:27:24.278) (total time: 10001ms): Mar 13 20:27:34 crc kubenswrapper[5029]: Trace[1779247754]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:27:34.279) Mar 13 20:27:34 crc kubenswrapper[5029]: Trace[1779247754]: [10.001520477s] [10.001520477s] END Mar 13 20:27:34 crc kubenswrapper[5029]: E0313 20:27:34.279989 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.532914 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.712384 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.714108 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2383c531eb7993355b35bb03b36bd5e6a01b46f046df61cb4caf91e056be963e" exitCode=255 Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.714151 5029 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2383c531eb7993355b35bb03b36bd5e6a01b46f046df61cb4caf91e056be963e"} Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.714330 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.715470 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.715508 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.715517 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.716068 5029 scope.go:117] "RemoveContainer" containerID="2383c531eb7993355b35bb03b36bd5e6a01b46f046df61cb4caf91e056be963e" Mar 13 20:27:34 crc kubenswrapper[5029]: W0313 20:27:34.782502 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:34Z is after 2026-02-23T05:33:13Z Mar 13 20:27:34 crc kubenswrapper[5029]: E0313 20:27:34.782497 5029 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:34Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c807ffff4532d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,LastTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:27:34 crc kubenswrapper[5029]: E0313 20:27:34.782597 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:34Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:34 crc kubenswrapper[5029]: W0313 20:27:34.797788 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:34Z is after 2026-02-23T05:33:13Z Mar 13 20:27:34 crc kubenswrapper[5029]: E0313 20:27:34.798065 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:34Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.798906 5029 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.798990 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 13 20:27:34 crc kubenswrapper[5029]: E0313 20:27:34.799163 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:34Z is after 2026-02-23T05:33:13Z" node="crc" Mar 13 20:27:34 crc kubenswrapper[5029]: E0313 20:27:34.799979 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:34Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 13 20:27:34 crc kubenswrapper[5029]: E0313 20:27:34.800810 5029 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:34Z is after 2026-02-23T05:33:13Z" 
logger="UnhandledError" Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.805235 5029 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 13 20:27:34 crc kubenswrapper[5029]: I0313 20:27:34.805294 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.169167 5029 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]log ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]etcd ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 13 20:27:35 crc 
kubenswrapper[5029]: [+]poststarthook/priority-and-fairness-filter ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-apiextensions-informers ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-apiextensions-controllers ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/crd-informer-synced ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-system-namespaces-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 13 20:27:35 crc kubenswrapper[5029]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 13 20:27:35 crc kubenswrapper[5029]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/bootstrap-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/start-kube-aggregator-informers ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/apiservice-registration-controller ok Mar 13 20:27:35 crc 
kubenswrapper[5029]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/apiservice-discovery-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]autoregister-completion ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/apiservice-openapi-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 13 20:27:35 crc kubenswrapper[5029]: livez check failed Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.169280 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.534583 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:35Z is after 2026-02-23T05:33:13Z Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.718136 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.718599 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.729095 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="faf7af7ab42012ad20c76c019a26a85f505ebec0e6b33125df8ec22a7683f88e" exitCode=255 Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.729148 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"faf7af7ab42012ad20c76c019a26a85f505ebec0e6b33125df8ec22a7683f88e"} Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.729201 5029 scope.go:117] "RemoveContainer" containerID="2383c531eb7993355b35bb03b36bd5e6a01b46f046df61cb4caf91e056be963e" Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.729440 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.730506 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.730540 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.730550 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:35 crc kubenswrapper[5029]: I0313 20:27:35.731790 5029 scope.go:117] "RemoveContainer" containerID="faf7af7ab42012ad20c76c019a26a85f505ebec0e6b33125df8ec22a7683f88e" Mar 13 20:27:35 crc kubenswrapper[5029]: E0313 20:27:35.732042 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:27:36 crc kubenswrapper[5029]: I0313 20:27:36.536580 5029 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:36Z is after 2026-02-23T05:33:13Z Mar 13 20:27:36 crc kubenswrapper[5029]: I0313 20:27:36.732992 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 13 20:27:36 crc kubenswrapper[5029]: I0313 20:27:36.924208 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:36 crc kubenswrapper[5029]: I0313 20:27:36.924397 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:36 crc kubenswrapper[5029]: I0313 20:27:36.925710 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:36 crc kubenswrapper[5029]: I0313 20:27:36.925764 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:36 crc kubenswrapper[5029]: I0313 20:27:36.925779 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:36 crc kubenswrapper[5029]: I0313 20:27:36.926542 5029 scope.go:117] "RemoveContainer" containerID="faf7af7ab42012ad20c76c019a26a85f505ebec0e6b33125df8ec22a7683f88e" Mar 13 20:27:36 crc kubenswrapper[5029]: E0313 20:27:36.926756 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:27:37 crc kubenswrapper[5029]: I0313 20:27:37.533501 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:37Z is after 2026-02-23T05:33:13Z Mar 13 20:27:38 crc kubenswrapper[5029]: I0313 20:27:38.535724 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:38Z is after 2026-02-23T05:33:13Z Mar 13 20:27:38 crc kubenswrapper[5029]: W0313 20:27:38.788466 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:38Z is after 2026-02-23T05:33:13Z Mar 13 20:27:38 crc kubenswrapper[5029]: E0313 20:27:38.788542 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:38Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:38 crc kubenswrapper[5029]: W0313 20:27:38.895139 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:38Z is after 2026-02-23T05:33:13Z Mar 13 20:27:38 crc kubenswrapper[5029]: E0313 20:27:38.895278 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:38Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:39 crc kubenswrapper[5029]: W0313 20:27:39.074536 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:39Z is after 2026-02-23T05:33:13Z Mar 13 20:27:39 crc kubenswrapper[5029]: E0313 20:27:39.074607 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.533297 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-13T20:27:39Z is after 2026-02-23T05:33:13Z Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.954927 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.955205 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.955555 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.961212 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.962784 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.962845 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.962887 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.962843 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.963042 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.963074 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.963690 5029 scope.go:117] "RemoveContainer" containerID="faf7af7ab42012ad20c76c019a26a85f505ebec0e6b33125df8ec22a7683f88e" Mar 13 20:27:39 crc 
kubenswrapper[5029]: E0313 20:27:39.963956 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:27:39 crc kubenswrapper[5029]: I0313 20:27:39.978149 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.168510 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.486904 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.536071 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:40Z is after 2026-02-23T05:33:13Z Mar 13 20:27:40 crc kubenswrapper[5029]: E0313 20:27:40.658690 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.747254 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.747460 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.748308 5029 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.748375 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.748395 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.749093 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.749169 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.749198 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:40 crc kubenswrapper[5029]: I0313 20:27:40.749177 5029 scope.go:117] "RemoveContainer" containerID="faf7af7ab42012ad20c76c019a26a85f505ebec0e6b33125df8ec22a7683f88e" Mar 13 20:27:40 crc kubenswrapper[5029]: E0313 20:27:40.749597 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.200169 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.201892 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 
20:27:41.201953 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.201968 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.202026 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:27:41 crc kubenswrapper[5029]: E0313 20:27:41.205054 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:41Z is after 2026-02-23T05:33:13Z" node="crc" Mar 13 20:27:41 crc kubenswrapper[5029]: E0313 20:27:41.205873 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:41Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.535674 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:41Z is after 2026-02-23T05:33:13Z Mar 13 20:27:41 crc kubenswrapper[5029]: W0313 20:27:41.672369 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:41Z is after 
2026-02-23T05:33:13Z Mar 13 20:27:41 crc kubenswrapper[5029]: E0313 20:27:41.672502 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.749376 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.750144 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.750175 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.750184 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:41 crc kubenswrapper[5029]: I0313 20:27:41.750653 5029 scope.go:117] "RemoveContainer" containerID="faf7af7ab42012ad20c76c019a26a85f505ebec0e6b33125df8ec22a7683f88e" Mar 13 20:27:41 crc kubenswrapper[5029]: E0313 20:27:41.750809 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:27:42 crc kubenswrapper[5029]: I0313 20:27:42.532995 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:42Z is after 2026-02-23T05:33:13Z Mar 13 20:27:43 crc kubenswrapper[5029]: I0313 20:27:43.537237 5029 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 20:27:43 crc kubenswrapper[5029]: I0313 20:27:43.538332 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:43Z is after 2026-02-23T05:33:13Z Mar 13 20:27:43 crc kubenswrapper[5029]: E0313 20:27:43.543443 5029 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:44 crc kubenswrapper[5029]: I0313 20:27:44.202898 5029 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 20:27:44 crc kubenswrapper[5029]: I0313 20:27:44.203033 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 20:27:44 crc kubenswrapper[5029]: I0313 20:27:44.535405 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:44Z is after 2026-02-23T05:33:13Z Mar 13 20:27:44 crc kubenswrapper[5029]: E0313 20:27:44.787739 5029 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:44Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c807ffff4532d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,LastTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:27:45 crc kubenswrapper[5029]: I0313 20:27:45.534480 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:45Z is after 2026-02-23T05:33:13Z Mar 13 20:27:45 crc 
kubenswrapper[5029]: W0313 20:27:45.756141 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:45Z is after 2026-02-23T05:33:13Z Mar 13 20:27:45 crc kubenswrapper[5029]: E0313 20:27:45.756257 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:45Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:46 crc kubenswrapper[5029]: I0313 20:27:46.533384 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:46Z is after 2026-02-23T05:33:13Z Mar 13 20:27:47 crc kubenswrapper[5029]: I0313 20:27:47.533811 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:47Z is after 2026-02-23T05:33:13Z Mar 13 20:27:48 crc kubenswrapper[5029]: W0313 20:27:48.091268 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-13T20:27:48Z is after 2026-02-23T05:33:13Z Mar 13 20:27:48 crc kubenswrapper[5029]: E0313 20:27:48.091390 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:48Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:48 crc kubenswrapper[5029]: I0313 20:27:48.205320 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:48 crc kubenswrapper[5029]: I0313 20:27:48.206587 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:48 crc kubenswrapper[5029]: I0313 20:27:48.206616 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:48 crc kubenswrapper[5029]: I0313 20:27:48.206626 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:48 crc kubenswrapper[5029]: I0313 20:27:48.206648 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:27:48 crc kubenswrapper[5029]: E0313 20:27:48.209469 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:48Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 13 20:27:48 crc kubenswrapper[5029]: E0313 20:27:48.209601 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:48Z is after 2026-02-23T05:33:13Z" node="crc" Mar 13 20:27:48 crc kubenswrapper[5029]: I0313 20:27:48.533095 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:48Z is after 2026-02-23T05:33:13Z Mar 13 20:27:49 crc kubenswrapper[5029]: W0313 20:27:49.397490 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:49Z is after 2026-02-23T05:33:13Z Mar 13 20:27:49 crc kubenswrapper[5029]: E0313 20:27:49.397561 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 13 20:27:49 crc kubenswrapper[5029]: I0313 20:27:49.534127 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:49Z is after 2026-02-23T05:33:13Z Mar 13 20:27:50 crc kubenswrapper[5029]: I0313 
20:27:50.534731 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:50Z is after 2026-02-23T05:33:13Z Mar 13 20:27:50 crc kubenswrapper[5029]: E0313 20:27:50.658926 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:27:51 crc kubenswrapper[5029]: I0313 20:27:51.533980 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:51Z is after 2026-02-23T05:33:13Z Mar 13 20:27:52 crc kubenswrapper[5029]: I0313 20:27:52.535710 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 2026-02-23T05:33:13Z Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.014634 5029 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:60208->192.168.126.11:10357: read: connection reset by peer" start-of-body= Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.014703 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:60208->192.168.126.11:10357: read: connection reset by peer" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.014766 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.014954 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.016100 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.016127 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.016138 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.016634 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.016786 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55" gracePeriod=30 Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.534372 5029 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:53Z is after 2026-02-23T05:33:13Z Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.599468 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.600934 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.600984 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.600995 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.601643 5029 scope.go:117] "RemoveContainer" containerID="faf7af7ab42012ad20c76c019a26a85f505ebec0e6b33125df8ec22a7683f88e" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.784581 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.784990 5029 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55" exitCode=255 Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.785025 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55"} Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.785053 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7"} Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.785176 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.786096 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.786154 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:53 crc kubenswrapper[5029]: I0313 20:27:53.786167 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.532820 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:54Z is after 2026-02-23T05:33:13Z Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.789415 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.789963 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.792129 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6ee131fb374219b4cb4ec395df2c77d7381e1d92efdc49ddf52daf8431eefea1" exitCode=255 Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.792267 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.792880 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6ee131fb374219b4cb4ec395df2c77d7381e1d92efdc49ddf52daf8431eefea1"} Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.792932 5029 scope.go:117] "RemoveContainer" containerID="faf7af7ab42012ad20c76c019a26a85f505ebec0e6b33125df8ec22a7683f88e" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.793016 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.793373 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.793403 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.793412 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.793686 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.793717 5029 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.793730 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:54 crc kubenswrapper[5029]: I0313 20:27:54.794164 5029 scope.go:117] "RemoveContainer" containerID="6ee131fb374219b4cb4ec395df2c77d7381e1d92efdc49ddf52daf8431eefea1" Mar 13 20:27:54 crc kubenswrapper[5029]: E0313 20:27:54.794120 5029 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:54Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c807ffff4532d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,LastTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:27:54 crc kubenswrapper[5029]: E0313 20:27:54.794336 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:27:55 crc kubenswrapper[5029]: I0313 20:27:55.210367 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 13 20:27:55 crc kubenswrapper[5029]: I0313 20:27:55.211680 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:55 crc kubenswrapper[5029]: I0313 20:27:55.211768 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:55 crc kubenswrapper[5029]: I0313 20:27:55.211784 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:55 crc kubenswrapper[5029]: I0313 20:27:55.211823 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:27:55 crc kubenswrapper[5029]: E0313 20:27:55.213749 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:55Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 13 20:27:55 crc kubenswrapper[5029]: E0313 20:27:55.215687 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:55Z is after 2026-02-23T05:33:13Z" node="crc" Mar 13 20:27:55 crc kubenswrapper[5029]: I0313 20:27:55.533773 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:55Z is after 2026-02-23T05:33:13Z Mar 13 20:27:55 crc kubenswrapper[5029]: I0313 20:27:55.797002 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 13 20:27:56 crc kubenswrapper[5029]: I0313 20:27:56.537340 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:27:56 crc kubenswrapper[5029]: I0313 20:27:56.923833 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:56 crc kubenswrapper[5029]: I0313 20:27:56.924050 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:56 crc kubenswrapper[5029]: I0313 20:27:56.925975 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:56 crc kubenswrapper[5029]: I0313 20:27:56.926018 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:56 crc kubenswrapper[5029]: I0313 20:27:56.926030 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:56 crc kubenswrapper[5029]: I0313 20:27:56.926631 5029 scope.go:117] "RemoveContainer" containerID="6ee131fb374219b4cb4ec395df2c77d7381e1d92efdc49ddf52daf8431eefea1" Mar 13 20:27:56 crc kubenswrapper[5029]: E0313 20:27:56.926844 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:27:57 crc kubenswrapper[5029]: I0313 
20:27:57.537941 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:27:58 crc kubenswrapper[5029]: I0313 20:27:58.536110 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:27:58 crc kubenswrapper[5029]: W0313 20:27:58.980738 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 13 20:27:58 crc kubenswrapper[5029]: E0313 20:27:58.980832 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 20:27:59 crc kubenswrapper[5029]: I0313 20:27:59.536634 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:27:59 crc kubenswrapper[5029]: I0313 20:27:59.871301 5029 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 20:27:59 crc kubenswrapper[5029]: I0313 20:27:59.889556 5029 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 13 20:27:59 crc kubenswrapper[5029]: I0313 20:27:59.956080 5029 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:27:59 crc kubenswrapper[5029]: I0313 20:27:59.956289 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:27:59 crc kubenswrapper[5029]: I0313 20:27:59.958015 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:27:59 crc kubenswrapper[5029]: I0313 20:27:59.958086 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:27:59 crc kubenswrapper[5029]: I0313 20:27:59.958106 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:27:59 crc kubenswrapper[5029]: I0313 20:27:59.959405 5029 scope.go:117] "RemoveContainer" containerID="6ee131fb374219b4cb4ec395df2c77d7381e1d92efdc49ddf52daf8431eefea1" Mar 13 20:27:59 crc kubenswrapper[5029]: E0313 20:27:59.959821 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:28:00 crc kubenswrapper[5029]: I0313 20:28:00.538030 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:00 crc kubenswrapper[5029]: E0313 20:28:00.659084 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:28:01 crc kubenswrapper[5029]: I0313 
20:28:01.202912 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:28:01 crc kubenswrapper[5029]: I0313 20:28:01.203251 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:01 crc kubenswrapper[5029]: I0313 20:28:01.205099 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:01 crc kubenswrapper[5029]: I0313 20:28:01.205174 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:01 crc kubenswrapper[5029]: I0313 20:28:01.205195 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:01 crc kubenswrapper[5029]: I0313 20:28:01.538130 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.216160 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.218015 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.218185 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.218277 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.218393 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:28:02 crc 
kubenswrapper[5029]: E0313 20:28:02.219211 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 20:28:02 crc kubenswrapper[5029]: E0313 20:28:02.224345 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.537395 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.786702 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.787098 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.789437 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.789521 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:02 crc kubenswrapper[5029]: I0313 20:28:02.789541 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:03 crc kubenswrapper[5029]: W0313 20:28:03.502872 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource 
"services" in API group "" at the cluster scope Mar 13 20:28:03 crc kubenswrapper[5029]: E0313 20:28:03.502918 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 13 20:28:03 crc kubenswrapper[5029]: I0313 20:28:03.536741 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:04 crc kubenswrapper[5029]: I0313 20:28:04.203095 5029 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 20:28:04 crc kubenswrapper[5029]: I0313 20:28:04.203827 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 20:28:04 crc kubenswrapper[5029]: I0313 20:28:04.535269 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.801170 5029 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c807ffff4532d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,LastTimestamp:2026-03-13 20:27:20.520536877 +0000 UTC m=+0.536619280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.806110 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034d7eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.57671243 +0000 UTC m=+0.592794853,LastTimestamp:2026-03-13 20:27:20.57671243 +0000 UTC m=+0.592794853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.810493 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034de60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576738828 +0000 UTC m=+0.592821241,LastTimestamp:2026-03-13 20:27:20.576738828 +0000 UTC m=+0.592821241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.815156 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034e17b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576751537 +0000 UTC m=+0.592833950,LastTimestamp:2026-03-13 20:27:20.576751537 +0000 UTC m=+0.592833950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.818951 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c80800886a201 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across 
pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.664343041 +0000 UTC m=+0.680425484,LastTimestamp:2026-03-13 20:27:20.664343041 +0000 UTC m=+0.680425484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.823348 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034d7eee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034d7eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.57671243 +0000 UTC m=+0.592794853,LastTimestamp:2026-03-13 20:27:20.69977631 +0000 UTC m=+0.715858713,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.828736 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034de60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034de60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576738828 +0000 UTC m=+0.592821241,LastTimestamp:2026-03-13 
20:27:20.699797438 +0000 UTC m=+0.715879841,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.834700 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034e17b1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034e17b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576751537 +0000 UTC m=+0.592833950,LastTimestamp:2026-03-13 20:27:20.699811366 +0000 UTC m=+0.715893769,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.838660 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034d7eee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034d7eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.57671243 +0000 UTC m=+0.592794853,LastTimestamp:2026-03-13 20:27:20.701070002 +0000 UTC m=+0.717152405,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.843211 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034de60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034de60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576738828 +0000 UTC m=+0.592821241,LastTimestamp:2026-03-13 20:27:20.701082111 +0000 UTC m=+0.717164514,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.847247 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034d7eee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034d7eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.57671243 +0000 UTC m=+0.592794853,LastTimestamp:2026-03-13 20:27:20.70109869 +0000 UTC m=+0.717181093,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.851699 5029 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034de60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034de60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576738828 +0000 UTC m=+0.592821241,LastTimestamp:2026-03-13 20:27:20.701116128 +0000 UTC m=+0.717198531,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.855447 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034e17b1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034e17b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576751537 +0000 UTC m=+0.592833950,LastTimestamp:2026-03-13 20:27:20.701126737 +0000 UTC m=+0.717209140,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.860133 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034e17b1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034e17b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576751537 +0000 UTC m=+0.592833950,LastTimestamp:2026-03-13 20:27:20.701165314 +0000 UTC m=+0.717247717,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.867659 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034d7eee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034d7eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.57671243 +0000 UTC m=+0.592794853,LastTimestamp:2026-03-13 20:27:20.701807335 +0000 UTC m=+0.717889738,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.871533 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034de60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034de60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576738828 +0000 UTC m=+0.592821241,LastTimestamp:2026-03-13 20:27:20.701817364 +0000 UTC m=+0.717899757,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.875273 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034e17b1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034e17b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576751537 +0000 UTC m=+0.592833950,LastTimestamp:2026-03-13 20:27:20.701825954 +0000 UTC m=+0.717908347,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.880318 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034d7eee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034d7eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.57671243 +0000 UTC m=+0.592794853,LastTimestamp:2026-03-13 20:27:20.702862531 +0000 UTC m=+0.718944934,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.885702 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034de60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034de60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576738828 +0000 UTC m=+0.592821241,LastTimestamp:2026-03-13 20:27:20.702877519 +0000 UTC m=+0.718959922,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.891310 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034e17b1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034e17b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576751537 +0000 UTC 
m=+0.592833950,LastTimestamp:2026-03-13 20:27:20.702885888 +0000 UTC m=+0.718968281,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.895710 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034d7eee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034d7eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.57671243 +0000 UTC m=+0.592794853,LastTimestamp:2026-03-13 20:27:20.70298061 +0000 UTC m=+0.719063053,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.900520 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034de60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034de60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576738828 +0000 UTC m=+0.592821241,LastTimestamp:2026-03-13 20:27:20.703005998 +0000 UTC m=+0.719088421,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.904782 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034e17b1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034e17b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576751537 +0000 UTC m=+0.592833950,LastTimestamp:2026-03-13 20:27:20.703022106 +0000 UTC m=+0.719104529,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.909244 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034d7eee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034d7eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.57671243 +0000 UTC m=+0.592794853,LastTimestamp:2026-03-13 20:27:20.703541889 +0000 UTC m=+0.719624292,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.913252 5029 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c8080034de60c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c8080034de60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:20.576738828 +0000 UTC m=+0.592821241,LastTimestamp:2026-03-13 20:27:20.703553548 +0000 UTC m=+0.719635951,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.918639 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c808021fcebde openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.091525598 +0000 UTC m=+1.107608001,LastTimestamp:2026-03-13 20:27:21.091525598 +0000 UTC m=+1.107608001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 
20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.923158 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c8080235a0415 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.114403861 +0000 UTC m=+1.130486274,LastTimestamp:2026-03-13 20:27:21.114403861 +0000 UTC m=+1.130486274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.927380 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c8080235a4a3c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.11442182 +0000 UTC m=+1.130504223,LastTimestamp:2026-03-13 20:27:21.11442182 +0000 UTC m=+1.130504223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.933658 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80802363c7db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.115043803 +0000 UTC m=+1.131126206,LastTimestamp:2026-03-13 20:27:21.115043803 +0000 UTC m=+1.131126206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.937093 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c808023ba04cf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.120695503 +0000 UTC m=+1.136777946,LastTimestamp:2026-03-13 20:27:21.120695503 +0000 UTC m=+1.136777946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.939524 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c808046175e2a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.69723857 +0000 UTC m=+1.713320963,LastTimestamp:2026-03-13 20:27:21.69723857 +0000 UTC m=+1.713320963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.940831 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c8080461cc974 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.697593716 +0000 UTC m=+1.713676119,LastTimestamp:2026-03-13 20:27:21.697593716 +0000 UTC m=+1.713676119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.943424 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c8080461d8545 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.697641797 +0000 UTC m=+1.713724210,LastTimestamp:2026-03-13 20:27:21.697641797 +0000 UTC m=+1.713724210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.944924 5029 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c808046a2f8d2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.706387666 +0000 UTC m=+1.722470069,LastTimestamp:2026-03-13 20:27:21.706387666 +0000 UTC m=+1.722470069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.948695 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c808046a445fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.706472958 +0000 UTC m=+1.722555381,LastTimestamp:2026-03-13 20:27:21.706472958 +0000 UTC m=+1.722555381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.953016 5029 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c808046a736c7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.706665671 +0000 UTC m=+1.722748074,LastTimestamp:2026-03-13 20:27:21.706665671 +0000 UTC m=+1.722748074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.957064 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808046d69546 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.709770054 +0000 UTC m=+1.725852457,LastTimestamp:2026-03-13 20:27:21.709770054 +0000 UTC m=+1.725852457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.961339 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808046e9e2ce openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.711035086 +0000 UTC m=+1.727117479,LastTimestamp:2026-03-13 20:27:21.711035086 +0000 UTC m=+1.727117479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.965318 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c808047456826 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 
20:27:21.717032998 +0000 UTC m=+1.733115401,LastTimestamp:2026-03-13 20:27:21.717032998 +0000 UTC m=+1.733115401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.969312 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c808047afe88f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.724012687 +0000 UTC m=+1.740095090,LastTimestamp:2026-03-13 20:27:21.724012687 +0000 UTC m=+1.740095090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.973437 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c808047b59565 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.724384613 +0000 UTC 
m=+1.740467016,LastTimestamp:2026-03-13 20:27:21.724384613 +0000 UTC m=+1.740467016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.977743 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808059da7d43 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.028793155 +0000 UTC m=+2.044875558,LastTimestamp:2026-03-13 20:27:22.028793155 +0000 UTC m=+2.044875558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.981559 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c80805a9afe24 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.04140906 +0000 UTC m=+2.057491463,LastTimestamp:2026-03-13 20:27:22.04140906 +0000 UTC m=+2.057491463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.986485 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c80805aa93440 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.042340416 +0000 UTC m=+2.058422809,LastTimestamp:2026-03-13 20:27:22.042340416 +0000 UTC m=+2.058422809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.991092 5029 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808065fb3ad2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.232265426 +0000 UTC m=+2.248347849,LastTimestamp:2026-03-13 20:27:22.232265426 +0000 UTC m=+2.248347849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.995570 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808066c686de openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.245588702 +0000 UTC m=+2.261671105,LastTimestamp:2026-03-13 20:27:22.245588702 +0000 UTC m=+2.261671105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:04 crc kubenswrapper[5029]: E0313 20:28:04.999435 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808066d5b2ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.24658297 +0000 UTC m=+2.262665383,LastTimestamp:2026-03-13 20:27:22.24658297 +0000 UTC m=+2.262665383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.009834 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c80806fe30b3c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.39845254 +0000 UTC m=+2.414534943,LastTimestamp:2026-03-13 20:27:22.39845254 +0000 UTC m=+2.414534943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.014171 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808070a232c1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.410980033 +0000 UTC m=+2.427062426,LastTimestamp:2026-03-13 20:27:22.410980033 +0000 UTC m=+2.427062426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.018629 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80807d7814b4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.626323636 +0000 UTC m=+2.642406069,LastTimestamp:2026-03-13 20:27:22.626323636 +0000 UTC m=+2.642406069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.023042 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c80807d9cbf14 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.628726548 +0000 UTC m=+2.644808981,LastTimestamp:2026-03-13 20:27:22.628726548 +0000 UTC m=+2.644808981,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.027200 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c80807e0ed0b5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.636202165 +0000 UTC m=+2.652284578,LastTimestamp:2026-03-13 20:27:22.636202165 +0000 UTC m=+2.652284578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.031269 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c80807ec6bd4d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.648255821 +0000 UTC m=+2.664338224,LastTimestamp:2026-03-13 20:27:22.648255821 +0000 UTC m=+2.664338224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.035206 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c80808be83fa8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.868555688 +0000 UTC m=+2.884638091,LastTimestamp:2026-03-13 20:27:22.868555688 +0000 UTC m=+2.884638091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.039105 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80808be8d405 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.868593669 +0000 UTC m=+2.884676062,LastTimestamp:2026-03-13 20:27:22.868593669 +0000 UTC m=+2.884676062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.042497 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c80808beb19fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.868742652 +0000 UTC m=+2.884825055,LastTimestamp:2026-03-13 20:27:22.868742652 +0000 UTC m=+2.884825055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.046668 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c80808bedaa84 openshift-machine-config-operator 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.868910724 +0000 UTC m=+2.884993127,LastTimestamp:2026-03-13 20:27:22.868910724 +0000 UTC m=+2.884993127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.050645 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c80808ce59314 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.885157652 +0000 UTC m=+2.901240055,LastTimestamp:2026-03-13 20:27:22.885157652 +0000 UTC m=+2.901240055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.055020 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189c80808ce5f5b6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.885182902 +0000 UTC m=+2.901265305,LastTimestamp:2026-03-13 20:27:22.885182902 +0000 UTC m=+2.901265305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.059916 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c80808cf6252e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.88624363 +0000 UTC m=+2.902326023,LastTimestamp:2026-03-13 20:27:22.88624363 +0000 UTC m=+2.902326023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.063969 5029 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c80808d0ad459 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.887599193 +0000 UTC m=+2.903681596,LastTimestamp:2026-03-13 20:27:22.887599193 +0000 UTC m=+2.903681596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.068747 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80808d248984 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.889283972 +0000 UTC m=+2.905366375,LastTimestamp:2026-03-13 20:27:22.889283972 +0000 UTC m=+2.905366375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.073344 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c80808d84eb05 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.895600389 +0000 UTC m=+2.911682792,LastTimestamp:2026-03-13 20:27:22.895600389 +0000 UTC m=+2.911682792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.079265 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c80809b137665 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.123045989 +0000 UTC 
m=+3.139128392,LastTimestamp:2026-03-13 20:27:23.123045989 +0000 UTC m=+3.139128392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.083633 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c80809b2a7b9e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.124554654 +0000 UTC m=+3.140637057,LastTimestamp:2026-03-13 20:27:23.124554654 +0000 UTC m=+3.140637057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.087633 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c80809bc10a27 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.134421543 +0000 UTC m=+3.150503946,LastTimestamp:2026-03-13 20:27:23.134421543 +0000 UTC m=+3.150503946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.094568 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c80809bd07bd0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.13543368 +0000 UTC m=+3.151516083,LastTimestamp:2026-03-13 20:27:23.13543368 +0000 UTC m=+3.151516083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.100929 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c80809be0ff12 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.136515858 +0000 UTC m=+3.152598281,LastTimestamp:2026-03-13 20:27:23.136515858 +0000 UTC m=+3.152598281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.105874 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c80809bf35c53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.137719379 +0000 UTC m=+3.153801792,LastTimestamp:2026-03-13 20:27:23.137719379 +0000 UTC m=+3.153801792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.110407 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080a69b27e7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.316488167 +0000 UTC m=+3.332570570,LastTimestamp:2026-03-13 20:27:23.316488167 +0000 UTC m=+3.332570570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.114951 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c8080a6a57420 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.31716304 +0000 UTC m=+3.333245443,LastTimestamp:2026-03-13 20:27:23.31716304 +0000 UTC m=+3.333245443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc 
kubenswrapper[5029]: E0313 20:28:05.119928 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c8080a7a06193 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.333607827 +0000 UTC m=+3.349690240,LastTimestamp:2026-03-13 20:27:23.333607827 +0000 UTC m=+3.349690240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.127559 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080a7da7eee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.33741643 +0000 UTC m=+3.353498833,LastTimestamp:2026-03-13 20:27:23.33741643 +0000 UTC 
m=+3.353498833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.133758 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080a7ecdebc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.338620604 +0000 UTC m=+3.354703017,LastTimestamp:2026-03-13 20:27:23.338620604 +0000 UTC m=+3.354703017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.139004 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080b5a1525a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.56855049 +0000 UTC m=+3.584632893,LastTimestamp:2026-03-13 20:27:23.56855049 +0000 UTC m=+3.584632893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.143445 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080b68dd7ff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.584051199 +0000 UTC m=+3.600133602,LastTimestamp:2026-03-13 20:27:23.584051199 +0000 UTC m=+3.600133602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.144882 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080b6b4ea9d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.586611869 +0000 UTC m=+3.602694272,LastTimestamp:2026-03-13 20:27:23.586611869 +0000 UTC m=+3.602694272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.150428 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c8080bb13fc59 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.659951193 +0000 UTC m=+3.676033636,LastTimestamp:2026-03-13 20:27:23.659951193 +0000 UTC m=+3.676033636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.155444 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080c568caea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.833281258 +0000 UTC m=+3.849363661,LastTimestamp:2026-03-13 20:27:23.833281258 +0000 UTC m=+3.849363661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.160055 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080c6358205 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.846697477 +0000 UTC m=+3.862779880,LastTimestamp:2026-03-13 
20:27:23.846697477 +0000 UTC m=+3.862779880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.164294 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c8080c7a4ec7a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.870776442 +0000 UTC m=+3.886858845,LastTimestamp:2026-03-13 20:27:23.870776442 +0000 UTC m=+3.886858845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.168960 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c8080c84adc3d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.881651261 +0000 UTC m=+3.897733664,LastTimestamp:2026-03-13 20:27:23.881651261 +0000 UTC 
m=+3.897733664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.174806 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c8080f74e755d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:24.670416221 +0000 UTC m=+4.686498624,LastTimestamp:2026-03-13 20:27:24.670416221 +0000 UTC m=+4.686498624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.179252 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c808102ea2238 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:24.865167928 +0000 UTC 
m=+4.881250331,LastTimestamp:2026-03-13 20:27:24.865167928 +0000 UTC m=+4.881250331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.183338 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c808103509c3e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:24.871883838 +0000 UTC m=+4.887966241,LastTimestamp:2026-03-13 20:27:24.871883838 +0000 UTC m=+4.887966241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.189246 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80810362710e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:24.87305243 +0000 UTC m=+4.889134833,LastTimestamp:2026-03-13 20:27:24.87305243 +0000 UTC m=+4.889134833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.196292 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80810f4398d2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.072357586 +0000 UTC m=+5.088439989,LastTimestamp:2026-03-13 20:27:25.072357586 +0000 UTC m=+5.088439989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.202760 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80810fd70a0b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.082020363 +0000 UTC 
m=+5.098102766,LastTimestamp:2026-03-13 20:27:25.082020363 +0000 UTC m=+5.098102766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.207229 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80810fe31800 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.082810368 +0000 UTC m=+5.098892771,LastTimestamp:2026-03-13 20:27:25.082810368 +0000 UTC m=+5.098892771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.211511 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80811c35bc8d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.289553037 +0000 UTC m=+5.305635440,LastTimestamp:2026-03-13 20:27:25.289553037 +0000 UTC m=+5.305635440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.217692 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80811d338ed6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.306187478 +0000 UTC m=+5.322269881,LastTimestamp:2026-03-13 20:27:25.306187478 +0000 UTC m=+5.322269881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.221681 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80811d415469 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.307090025 +0000 UTC m=+5.323172428,LastTimestamp:2026-03-13 20:27:25.307090025 +0000 UTC m=+5.323172428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.225788 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c808127c7eff8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.483683832 +0000 UTC m=+5.499766235,LastTimestamp:2026-03-13 20:27:25.483683832 +0000 UTC m=+5.499766235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.230367 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c808128a0bb02 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.497891586 +0000 UTC m=+5.513973979,LastTimestamp:2026-03-13 20:27:25.497891586 +0000 UTC m=+5.513973979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.236769 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c808128b69b6e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.499325294 +0000 UTC m=+5.515407747,LastTimestamp:2026-03-13 20:27:25.499325294 +0000 UTC m=+5.515407747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.243953 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c80813550bc0f openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.710752783 +0000 UTC m=+5.726835186,LastTimestamp:2026-03-13 20:27:25.710752783 +0000 UTC m=+5.726835186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.250001 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c8081360a83bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:25.722928059 +0000 UTC m=+5.739010452,LastTimestamp:2026-03-13 20:27:25.722928059 +0000 UTC m=+5.739010452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.252778 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 13 20:28:05 crc kubenswrapper[5029]: &Event{ObjectMeta:{kube-controller-manager-crc.189c80832f873410 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 13 20:28:05 crc kubenswrapper[5029]: body: Mar 13 20:28:05 crc kubenswrapper[5029]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.203593744 +0000 UTC m=+14.219676147,LastTimestamp:2026-03-13 20:27:34.203593744 +0000 UTC m=+14.219676147,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 13 20:28:05 crc kubenswrapper[5029]: > Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.257284 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c80832f883344 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.203659076 +0000 UTC 
m=+14.219741479,LastTimestamp:2026-03-13 20:27:34.203659076 +0000 UTC m=+14.219741479,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.261965 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c8080b6b4ea9d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080b6b4ea9d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.586611869 +0000 UTC m=+3.602694272,LastTimestamp:2026-03-13 20:27:34.716998191 +0000 UTC m=+14.733080604,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.266160 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 13 20:28:05 crc kubenswrapper[5029]: &Event{ObjectMeta:{kube-apiserver-crc.189c80835303fe13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 13 20:28:05 crc kubenswrapper[5029]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 13 20:28:05 crc kubenswrapper[5029]: Mar 13 20:28:05 crc kubenswrapper[5029]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.798974483 +0000 UTC m=+14.815056896,LastTimestamp:2026-03-13 20:27:34.798974483 +0000 UTC m=+14.815056896,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 13 20:28:05 crc kubenswrapper[5029]: > Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.272062 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8083530a4634 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.799386164 +0000 UTC m=+14.815468577,LastTimestamp:2026-03-13 20:27:34.799386164 +0000 UTC m=+14.815468577,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.277076 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c80835303fe13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 13 20:28:05 crc kubenswrapper[5029]: &Event{ObjectMeta:{kube-apiserver-crc.189c80835303fe13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 13 20:28:05 crc kubenswrapper[5029]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 13 20:28:05 crc kubenswrapper[5029]: Mar 13 20:28:05 crc kubenswrapper[5029]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.798974483 +0000 UTC m=+14.815056896,LastTimestamp:2026-03-13 20:27:34.805278222 +0000 UTC m=+14.821360625,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 13 20:28:05 crc kubenswrapper[5029]: > Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.282073 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c8083530a4634\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8083530a4634 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.799386164 +0000 UTC m=+14.815468577,LastTimestamp:2026-03-13 20:27:34.805318713 +0000 UTC m=+14.821401116,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.287896 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c8080c568caea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080c568caea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.833281258 +0000 UTC m=+3.849363661,LastTimestamp:2026-03-13 20:27:34.92477611 +0000 UTC m=+14.940858523,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.291914 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c8080c6358205\" is forbidden: User \"system:anonymous\" cannot patch resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c8080c6358205 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:23.846697477 +0000 UTC m=+3.862779880,LastTimestamp:2026-03-13 20:27:34.937379139 +0000 UTC m=+14.953461532,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.297065 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c80832f873410\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 13 20:28:05 crc kubenswrapper[5029]: &Event{ObjectMeta:{kube-controller-manager-crc.189c80832f873410 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 13 20:28:05 crc kubenswrapper[5029]: body: Mar 13 20:28:05 crc kubenswrapper[5029]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.203593744 
+0000 UTC m=+14.219676147,LastTimestamp:2026-03-13 20:27:44.20299371 +0000 UTC m=+24.219076113,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 13 20:28:05 crc kubenswrapper[5029]: > Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.301275 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c80832f883344\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c80832f883344 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.203659076 +0000 UTC m=+14.219741479,LastTimestamp:2026-03-13 20:27:44.203081962 +0000 UTC m=+24.219164355,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.305395 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 13 20:28:05 crc kubenswrapper[5029]: &Event{ObjectMeta:{kube-controller-manager-crc.189c808790c1a976 openshift-kube-controller-manager 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:60208->192.168.126.11:10357: read: connection reset by peer Mar 13 20:28:05 crc kubenswrapper[5029]: body: Mar 13 20:28:05 crc kubenswrapper[5029]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:53.014684022 +0000 UTC m=+33.030766425,LastTimestamp:2026-03-13 20:27:53.014684022 +0000 UTC m=+33.030766425,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 13 20:28:05 crc kubenswrapper[5029]: > Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.310088 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808790c26a6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:60208->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:53.014733423 +0000 UTC m=+33.030815826,LastTimestamp:2026-03-13 20:27:53.014733423 +0000 UTC 
m=+33.030815826,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.314830 5029 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808790e192d2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:53.016775378 +0000 UTC m=+33.032857791,LastTimestamp:2026-03-13 20:27:53.016775378 +0000 UTC m=+33.032857791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.319338 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c808046e9e2ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808046e9e2ce openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:21.711035086 +0000 UTC m=+1.727117479,LastTimestamp:2026-03-13 20:27:53.532361673 +0000 UTC m=+33.548444076,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.323836 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c808059da7d43\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c808059da7d43 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.028793155 +0000 UTC m=+2.044875558,LastTimestamp:2026-03-13 20:27:53.733191186 +0000 UTC m=+33.749273589,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.328678 5029 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.189c80805a9afe24\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c80805a9afe24 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:22.04140906 +0000 UTC m=+2.057491463,LastTimestamp:2026-03-13 20:27:53.743495473 +0000 UTC m=+33.759577876,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.335827 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c80832f873410\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 13 20:28:05 crc kubenswrapper[5029]: &Event{ObjectMeta:{kube-controller-manager-crc.189c80832f873410 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 13 
20:28:05 crc kubenswrapper[5029]: body: Mar 13 20:28:05 crc kubenswrapper[5029]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.203593744 +0000 UTC m=+14.219676147,LastTimestamp:2026-03-13 20:28:04.203801174 +0000 UTC m=+44.219883597,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 13 20:28:05 crc kubenswrapper[5029]: > Mar 13 20:28:05 crc kubenswrapper[5029]: E0313 20:28:05.340266 5029 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c80832f883344\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c80832f883344 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:27:34.203659076 +0000 UTC m=+14.219741479,LastTimestamp:2026-03-13 20:28:04.203989949 +0000 UTC m=+44.220072362,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 13 20:28:05 crc kubenswrapper[5029]: I0313 20:28:05.538623 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 13 20:28:06 crc kubenswrapper[5029]: I0313 20:28:06.536183 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:07 crc kubenswrapper[5029]: I0313 20:28:07.535724 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:08 crc kubenswrapper[5029]: I0313 20:28:08.535684 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.064885 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.065321 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.067218 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.067246 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.067259 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.225036 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 
20:28:09 crc kubenswrapper[5029]: E0313 20:28:09.225720 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.227065 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.227196 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.227307 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.227894 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:28:09 crc kubenswrapper[5029]: E0313 20:28:09.235599 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 13 20:28:09 crc kubenswrapper[5029]: I0313 20:28:09.536071 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:10 crc kubenswrapper[5029]: I0313 20:28:10.539106 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:10 crc kubenswrapper[5029]: I0313 20:28:10.599137 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Mar 13 20:28:10 crc kubenswrapper[5029]: I0313 20:28:10.600510 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:10 crc kubenswrapper[5029]: I0313 20:28:10.600702 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:10 crc kubenswrapper[5029]: I0313 20:28:10.600832 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:10 crc kubenswrapper[5029]: I0313 20:28:10.601818 5029 scope.go:117] "RemoveContainer" containerID="6ee131fb374219b4cb4ec395df2c77d7381e1d92efdc49ddf52daf8431eefea1" Mar 13 20:28:10 crc kubenswrapper[5029]: E0313 20:28:10.602157 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:28:10 crc kubenswrapper[5029]: E0313 20:28:10.659358 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.206173 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.206389 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.208259 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.208309 5029 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.208342 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.209919 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:28:11 crc kubenswrapper[5029]: W0313 20:28:11.363659 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 13 20:28:11 crc kubenswrapper[5029]: E0313 20:28:11.363737 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.536917 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:11 crc kubenswrapper[5029]: W0313 20:28:11.765555 5029 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 13 20:28:11 crc kubenswrapper[5029]: E0313 20:28:11.765648 5029 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User 
\"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.844971 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.846024 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.846054 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:11 crc kubenswrapper[5029]: I0313 20:28:11.846062 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:12 crc kubenswrapper[5029]: I0313 20:28:12.534727 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:13 crc kubenswrapper[5029]: I0313 20:28:13.533432 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:14 crc kubenswrapper[5029]: I0313 20:28:14.537623 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:15 crc kubenswrapper[5029]: I0313 20:28:15.537022 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 13 20:28:16 crc kubenswrapper[5029]: E0313 20:28:16.232086 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 20:28:16 crc kubenswrapper[5029]: I0313 20:28:16.236489 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:16 crc kubenswrapper[5029]: I0313 20:28:16.238458 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:16 crc kubenswrapper[5029]: I0313 20:28:16.238503 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:16 crc kubenswrapper[5029]: I0313 20:28:16.238515 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:16 crc kubenswrapper[5029]: I0313 20:28:16.238540 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:28:16 crc kubenswrapper[5029]: E0313 20:28:16.246018 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 13 20:28:16 crc kubenswrapper[5029]: I0313 20:28:16.536569 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:17 crc kubenswrapper[5029]: I0313 20:28:17.534733 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:18 crc kubenswrapper[5029]: I0313 20:28:18.536760 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:19 crc kubenswrapper[5029]: I0313 20:28:19.533354 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:20 crc kubenswrapper[5029]: I0313 20:28:20.538068 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:20 crc kubenswrapper[5029]: E0313 20:28:20.660169 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.536185 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.599235 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.601838 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.601900 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 
20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.601917 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.602525 5029 scope.go:117] "RemoveContainer" containerID="6ee131fb374219b4cb4ec395df2c77d7381e1d92efdc49ddf52daf8431eefea1" Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.877827 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.883027 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c"} Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.883196 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.884335 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.884363 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:21 crc kubenswrapper[5029]: I0313 20:28:21.884373 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.534657 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.887292 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.887762 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.889718 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c" exitCode=255 Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.889779 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c"} Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.889845 5029 scope.go:117] "RemoveContainer" containerID="6ee131fb374219b4cb4ec395df2c77d7381e1d92efdc49ddf52daf8431eefea1" Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.890043 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.891194 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.891237 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.891251 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:22 crc kubenswrapper[5029]: I0313 20:28:22.891835 5029 scope.go:117] "RemoveContainer" containerID="fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c" Mar 13 20:28:22 
crc kubenswrapper[5029]: E0313 20:28:22.892104 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:28:23 crc kubenswrapper[5029]: E0313 20:28:23.238516 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 20:28:23 crc kubenswrapper[5029]: I0313 20:28:23.247223 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:23 crc kubenswrapper[5029]: I0313 20:28:23.248660 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:23 crc kubenswrapper[5029]: I0313 20:28:23.248703 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:23 crc kubenswrapper[5029]: I0313 20:28:23.248719 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:23 crc kubenswrapper[5029]: I0313 20:28:23.248757 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:28:23 crc kubenswrapper[5029]: E0313 20:28:23.257805 5029 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 13 20:28:23 crc kubenswrapper[5029]: I0313 20:28:23.536102 5029 csi_plugin.go:884] Failed to contact API server when waiting 
for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:23 crc kubenswrapper[5029]: I0313 20:28:23.895929 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 13 20:28:24 crc kubenswrapper[5029]: I0313 20:28:24.531201 5029 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 20:28:24 crc kubenswrapper[5029]: I0313 20:28:24.908043 5029 csr.go:261] certificate signing request csr-vscrr is approved, waiting to be issued Mar 13 20:28:24 crc kubenswrapper[5029]: I0313 20:28:24.916998 5029 csr.go:257] certificate signing request csr-vscrr is issued Mar 13 20:28:24 crc kubenswrapper[5029]: I0313 20:28:24.958840 5029 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 13 20:28:25 crc kubenswrapper[5029]: I0313 20:28:25.385116 5029 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 13 20:28:25 crc kubenswrapper[5029]: I0313 20:28:25.917910 5029 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-19 20:18:40.574816245 +0000 UTC Mar 13 20:28:25 crc kubenswrapper[5029]: I0313 20:28:25.917982 5029 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6023h50m14.656836968s for next certificate rotation Mar 13 20:28:26 crc kubenswrapper[5029]: I0313 20:28:26.924716 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:28:26 crc kubenswrapper[5029]: 
I0313 20:28:26.924940 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:26 crc kubenswrapper[5029]: I0313 20:28:26.926236 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:26 crc kubenswrapper[5029]: I0313 20:28:26.926280 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:26 crc kubenswrapper[5029]: I0313 20:28:26.926291 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:26 crc kubenswrapper[5029]: I0313 20:28:26.926958 5029 scope.go:117] "RemoveContainer" containerID="fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c" Mar 13 20:28:26 crc kubenswrapper[5029]: E0313 20:28:26.927127 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:28:29 crc kubenswrapper[5029]: I0313 20:28:29.513471 5029 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 20:28:29 crc kubenswrapper[5029]: I0313 20:28:29.955446 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:28:29 crc kubenswrapper[5029]: I0313 20:28:29.955661 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:29 crc kubenswrapper[5029]: I0313 20:28:29.956874 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:29 
crc kubenswrapper[5029]: I0313 20:28:29.957061 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:29 crc kubenswrapper[5029]: I0313 20:28:29.957127 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:29 crc kubenswrapper[5029]: I0313 20:28:29.957829 5029 scope.go:117] "RemoveContainer" containerID="fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c" Mar 13 20:28:29 crc kubenswrapper[5029]: E0313 20:28:29.958143 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.258992 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.261072 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.261125 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.261139 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.261299 5029 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.271989 5029 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.272308 5029 
kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.272347 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.276971 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.277011 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.277025 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.277044 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.277056 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:30Z","lastTransitionTime":"2026-03-13T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.292198 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.297020 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.297278 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.297371 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.297447 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.297516 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:30Z","lastTransitionTime":"2026-03-13T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.309998 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.314797 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.314884 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.314899 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.314925 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.314944 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:30Z","lastTransitionTime":"2026-03-13T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.327057 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.331326 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.331440 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.331524 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.331603 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:30 crc kubenswrapper[5029]: I0313 20:28:30.331684 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:30Z","lastTransitionTime":"2026-03-13T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.343070 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.343195 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.343235 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.443911 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.544551 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.645377 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.660695 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.745703 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.846464 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:30 crc kubenswrapper[5029]: E0313 20:28:30.947010 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: E0313 20:28:31.047472 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: 
E0313 20:28:31.148518 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: E0313 20:28:31.249505 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: E0313 20:28:31.350344 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: E0313 20:28:31.451514 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: E0313 20:28:31.551645 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: E0313 20:28:31.651830 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: E0313 20:28:31.752941 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: E0313 20:28:31.853989 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:31 crc kubenswrapper[5029]: E0313 20:28:31.954589 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.054994 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.156045 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.256350 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.357103 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.458128 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.558485 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.659182 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.759483 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.860541 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:32 crc kubenswrapper[5029]: E0313 20:28:32.961124 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.061683 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.162796 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.263022 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.364110 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.464255 5029 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found"
Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.565485 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.665729 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.766641 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.867718 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:33 crc kubenswrapper[5029]: E0313 20:28:33.968246 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.068845 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.169559 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.270489 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.370793 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.471924 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.572790 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.673886 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.774662 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.875591 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:34 crc kubenswrapper[5029]: E0313 20:28:34.976717 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.077481 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.178516 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.278998 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.380214 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.480620 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.580923 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.681290 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.782440 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.883099 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:35 crc kubenswrapper[5029]: E0313 20:28:35.983779 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.083985 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.185309 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.285862 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.386667 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.487078 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.587779 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.688828 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.789482 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.890610 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:36 crc kubenswrapper[5029]: E0313 20:28:36.991196 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:37 crc kubenswrapper[5029]: E0313 20:28:37.092480 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:37 crc kubenswrapper[5029]: E0313 20:28:37.193410 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:37 crc kubenswrapper[5029]: E0313 20:28:37.294664 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:37 crc kubenswrapper[5029]: E0313 20:28:37.395570 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:37 crc kubenswrapper[5029]: E0313 20:28:37.495847 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:37 crc kubenswrapper[5029]: E0313 20:28:37.597129 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:37 crc kubenswrapper[5029]: E0313 20:28:37.698717 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:37 crc kubenswrapper[5029]: E0313 20:28:37.799613 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:37 crc kubenswrapper[5029]: E0313 20:28:37.900147 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.001302 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.101627 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.202674 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.303573 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.404068 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.504644 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.605500 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.706256 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.807250 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:38 crc kubenswrapper[5029]: E0313 20:28:38.908082 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.009121 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.110005 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.211149 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.311985 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.412959 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.513133 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.613289 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.714112 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.814695 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:39 crc kubenswrapper[5029]: E0313 20:28:39.915746 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.016764 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.117371 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.217628 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.318375 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.418670 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.462940 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.469038 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.469100 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.469118 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.469148 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.469168 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:40Z","lastTransitionTime":"2026-03-13T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.485438 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.489887 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.489926 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.489949 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.489970 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.489986 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:40Z","lastTransitionTime":"2026-03-13T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.507070 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.513447 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.513478 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.513488 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.513504 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.513515 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:40Z","lastTransitionTime":"2026-03-13T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.521759 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.525971 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.526108 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.526199 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.526301 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:40 crc kubenswrapper[5029]: I0313 20:28:40.526392 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:40Z","lastTransitionTime":"2026-03-13T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.567338 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.567534 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.567555 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.660902 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.667687 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.768411 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.869530 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:40 crc kubenswrapper[5029]: E0313 20:28:40.970710 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc kubenswrapper[5029]: E0313 20:28:41.071347 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc 
kubenswrapper[5029]: E0313 20:28:41.171792 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc kubenswrapper[5029]: E0313 20:28:41.272476 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc kubenswrapper[5029]: E0313 20:28:41.372948 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc kubenswrapper[5029]: E0313 20:28:41.473476 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc kubenswrapper[5029]: E0313 20:28:41.574608 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc kubenswrapper[5029]: E0313 20:28:41.675353 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc kubenswrapper[5029]: E0313 20:28:41.775635 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc kubenswrapper[5029]: E0313 20:28:41.876816 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:41 crc kubenswrapper[5029]: E0313 20:28:41.977947 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.078071 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.178898 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.279938 5029 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.380842 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: I0313 20:28:42.396811 5029 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.481131 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.581308 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.682497 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.782656 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.883553 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:42 crc kubenswrapper[5029]: E0313 20:28:42.983894 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.084375 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.184828 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.285115 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.386275 5029 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.487379 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.587916 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.688931 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.789078 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.890097 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:43 crc kubenswrapper[5029]: E0313 20:28:43.990893 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.091273 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.192405 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.293338 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.393535 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.494452 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 
20:28:44.595633 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: I0313 20:28:44.599017 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:44 crc kubenswrapper[5029]: I0313 20:28:44.599049 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:44 crc kubenswrapper[5029]: I0313 20:28:44.600242 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:44 crc kubenswrapper[5029]: I0313 20:28:44.600274 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:44 crc kubenswrapper[5029]: I0313 20:28:44.600285 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:44 crc kubenswrapper[5029]: I0313 20:28:44.600294 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:44 crc kubenswrapper[5029]: I0313 20:28:44.600319 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:44 crc kubenswrapper[5029]: I0313 20:28:44.600333 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:44 crc kubenswrapper[5029]: I0313 20:28:44.601049 5029 scope.go:117] "RemoveContainer" containerID="fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.601242 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.696054 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.796698 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.897874 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:44 crc kubenswrapper[5029]: E0313 20:28:44.998902 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:45 crc kubenswrapper[5029]: E0313 20:28:45.099007 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:45 crc kubenswrapper[5029]: E0313 20:28:45.200154 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:45 crc kubenswrapper[5029]: E0313 20:28:45.300999 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:45 crc kubenswrapper[5029]: E0313 20:28:45.401328 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:45 crc kubenswrapper[5029]: E0313 20:28:45.502483 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:45 crc kubenswrapper[5029]: E0313 20:28:45.603835 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:45 crc kubenswrapper[5029]: E0313 20:28:45.705202 5029 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Mar 13 20:28:45 crc kubenswrapper[5029]: E0313 20:28:45.805581 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:45 crc kubenswrapper[5029]: E0313 20:28:45.906425 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.007703 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.108118 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.208522 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.308874 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.410264 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.510333 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: I0313 20:28:46.514228 5029 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 20:28:46 crc kubenswrapper[5029]: I0313 20:28:46.598586 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:46 crc kubenswrapper[5029]: I0313 20:28:46.599734 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:46 crc kubenswrapper[5029]: I0313 20:28:46.599884 5029 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:46 crc kubenswrapper[5029]: I0313 20:28:46.599968 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.611693 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.712863 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.813946 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:46 crc kubenswrapper[5029]: E0313 20:28:46.915135 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc kubenswrapper[5029]: E0313 20:28:47.015774 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc kubenswrapper[5029]: E0313 20:28:47.116959 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc kubenswrapper[5029]: E0313 20:28:47.217399 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc kubenswrapper[5029]: E0313 20:28:47.318489 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc kubenswrapper[5029]: E0313 20:28:47.419446 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc kubenswrapper[5029]: E0313 20:28:47.519727 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc 
kubenswrapper[5029]: E0313 20:28:47.620686 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc kubenswrapper[5029]: E0313 20:28:47.720891 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc kubenswrapper[5029]: E0313 20:28:47.821876 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:47 crc kubenswrapper[5029]: E0313 20:28:47.922940 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.023680 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.124334 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.225523 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.326649 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.427255 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.527580 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.628215 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.728636 5029 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.829890 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:48 crc kubenswrapper[5029]: E0313 20:28:48.930491 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.031665 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.132781 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.233805 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.334797 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.434967 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.535606 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.635731 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.736771 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.837960 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:49 crc kubenswrapper[5029]: E0313 20:28:49.938836 5029 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.039280 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.139449 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.240089 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.340615 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.441581 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.542447 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.642933 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.662161 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.742744 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.747956 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.748003 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.748016 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.748035 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.748051 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:50Z","lastTransitionTime":"2026-03-13T20:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.758814 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.763139 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.763243 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.763263 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.763288 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.763306 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:50Z","lastTransitionTime":"2026-03-13T20:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.778695 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.778726 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.778739 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.778754 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.778766 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:50Z","lastTransitionTime":"2026-03-13T20:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.796170 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.796222 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.796235 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.796268 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:28:50 crc kubenswrapper[5029]: I0313 20:28:50.796282 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:28:50Z","lastTransitionTime":"2026-03-13T20:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.806483 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.806823 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.806904 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:50 crc kubenswrapper[5029]: E0313 20:28:50.907983 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.008873 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.110026 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.210679 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.311645 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.412451 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.512749 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.613966 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.714468 5029 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.814650 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:51 crc kubenswrapper[5029]: E0313 20:28:51.915520 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc kubenswrapper[5029]: E0313 20:28:52.015920 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc kubenswrapper[5029]: E0313 20:28:52.116832 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc kubenswrapper[5029]: E0313 20:28:52.217457 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc kubenswrapper[5029]: E0313 20:28:52.318336 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc kubenswrapper[5029]: E0313 20:28:52.418614 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc kubenswrapper[5029]: E0313 20:28:52.519185 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc kubenswrapper[5029]: E0313 20:28:52.619654 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc kubenswrapper[5029]: E0313 20:28:52.720769 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc kubenswrapper[5029]: E0313 20:28:52.821480 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:52 crc 
kubenswrapper[5029]: E0313 20:28:52.922632 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.022822 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.123791 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.224290 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.324610 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.425745 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.526577 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.627036 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.727576 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.827889 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:53 crc kubenswrapper[5029]: E0313 20:28:53.928295 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.028951 5029 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.130054 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.230808 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.330957 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.431121 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.532120 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.632875 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.733790 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.834283 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:54 crc kubenswrapper[5029]: E0313 20:28:54.935237 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.035743 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.136650 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.237547 5029 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.338446 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.439321 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.539721 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.640877 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.741839 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.842726 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:55 crc kubenswrapper[5029]: E0313 20:28:55.943779 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 20:28:56.044077 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 20:28:56.144903 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 20:28:56.245169 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 20:28:56.345423 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 
20:28:56.446046 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 20:28:56.546281 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 20:28:56.646770 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 20:28:56.746924 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 20:28:56.847919 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:56 crc kubenswrapper[5029]: E0313 20:28:56.948724 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.049915 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.151079 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.251905 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.353164 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.453472 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.554507 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 
20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.655118 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.755637 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.856817 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:57 crc kubenswrapper[5029]: E0313 20:28:57.957627 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.057983 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.158222 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.258944 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.359075 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.459557 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.560345 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.660927 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.762024 5029 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.862779 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:58 crc kubenswrapper[5029]: E0313 20:28:58.963910 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.064063 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.164561 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.265744 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.366229 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.467300 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.567996 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: I0313 20:28:59.598474 5029 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 20:28:59 crc kubenswrapper[5029]: I0313 20:28:59.599672 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:28:59 crc kubenswrapper[5029]: I0313 20:28:59.599719 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:28:59 crc kubenswrapper[5029]: I0313 20:28:59.599733 5029 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:28:59 crc kubenswrapper[5029]: I0313 20:28:59.600403 5029 scope.go:117] "RemoveContainer" containerID="fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.600591 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.668621 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.769656 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.870346 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:28:59 crc kubenswrapper[5029]: E0313 20:28:59.970742 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: E0313 20:29:00.071491 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: E0313 20:29:00.171660 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: E0313 20:29:00.272712 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: E0313 
20:29:00.373671 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: E0313 20:29:00.474165 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: E0313 20:29:00.575047 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: E0313 20:29:00.662719 5029 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: E0313 20:29:00.675398 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: E0313 20:29:00.775915 5029 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.869911 5029 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.878296 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.878350 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.878362 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.878378 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.878392 5029 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:00Z","lastTransitionTime":"2026-03-13T20:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.980927 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.980966 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.980974 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.980986 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:00 crc kubenswrapper[5029]: I0313 20:29:00.980996 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:00Z","lastTransitionTime":"2026-03-13T20:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.049092 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.049136 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.049145 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.049162 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.049172 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.058965 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.061926 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.061956 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.061964 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.061978 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.061987 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.072009 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.075153 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.075192 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.075203 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.075222 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.075233 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.089719 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.094118 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.094151 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.094162 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.094180 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.094193 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.106242 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.109422 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.109476 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.109489 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.109509 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.109522 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.119461 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.119677 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.121378 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.121417 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.121426 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.121444 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.121455 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.223654 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.223712 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.223728 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.223745 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.223759 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.326548 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.326579 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.326590 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.326603 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.326614 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.429008 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.429065 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.429079 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.429098 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.429110 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.531009 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.531053 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.531061 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.531075 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.531084 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.564768 5029 apiserver.go:52] "Watching apiserver" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.569913 5029 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.570376 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c","openshift-ovn-kubernetes/ovnkube-node-v2xrv","openshift-dns/node-resolver-5xkjw","openshift-machine-config-operator/machine-config-daemon-28st2","openshift-multus/multus-2thxr","openshift-multus/multus-additional-cni-plugins-zrq2k","openshift-network-diagnostics/network-check-target-xd92c","openshift-image-registry/node-ca-jflsf","openshift-multus/network-metrics-daemon-frlln","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.570762 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.570861 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.570917 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.570936 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.570986 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.571151 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.571199 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.571401 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.571414 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.571448 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.573138 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.573473 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.573540 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.574175 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.575001 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.574252 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.574203 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5xkjw" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.574256 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.575752 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.576203 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.576225 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.576248 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.576467 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.576548 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.576758 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.577342 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.577529 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.577735 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 
20:29:01.577880 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.578066 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.579380 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.579665 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.580217 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.580241 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.580307 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.580430 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.580434 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.580672 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.580724 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.581044 5029 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.581108 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.581404 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.581780 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.581790 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.582773 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.582839 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.583206 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.583288 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.583330 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.583461 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 13 20:29:01 crc kubenswrapper[5029]: 
I0313 20:29:01.584353 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.585066 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.585236 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.585419 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.587831 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.601623 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.615085 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.628310 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.633282 5029 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.633731 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: 
I0313 20:29:01.633755 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.633764 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.633779 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.633789 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.639698 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.652437 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.663134 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.678786 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.696232 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.698819 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.698906 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.698933 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.698951 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.698969 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 13 20:29:01 crc 
kubenswrapper[5029]: I0313 20:29:01.698985 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699001 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699019 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699037 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699052 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699067 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699083 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699102 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699127 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699149 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699197 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 13 
20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699224 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699245 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699263 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699282 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699298 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699318 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699337 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699358 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699376 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699397 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699415 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699516 5029 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699536 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699548 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699556 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699610 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699636 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699656 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699672 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699688 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699706 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699720 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699736 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699752 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699770 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699827 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699846 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699882 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699898 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.699924 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700077 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700165 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700191 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700209 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700226 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700245 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700262 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700279 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700295 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700367 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700484 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700524 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700716 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700756 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700925 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700667 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.700951 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701017 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701039 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701060 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701081 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701102 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701134 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701151 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701170 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701186 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701206 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701225 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701245 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701262 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701280 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701298 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701314 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701333 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701352 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701373 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701394 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701410 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701430 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701449 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701464 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701528 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701548 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701570 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701593 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701616 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701644 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701665 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701683 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701699 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701717 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701734 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701751 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701792 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701817 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701842 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701879 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701917 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701969 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701988 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702003 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702021 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702038 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702055 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702071 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702088 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702105 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702123 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702139 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702761 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701335 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.707877 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701364 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.701561 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702283 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702482 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.702504 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.702778 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:29:02.202751739 +0000 UTC m=+102.218834152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.703127 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.703183 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.703234 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.703273 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.703557 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.703567 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.703577 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.707949 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.703617 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.704144 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.704188 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.704166 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.705244 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.705308 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.705343 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.705348 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.705525 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.706250 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.706382 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.706651 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.706800 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.703751 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.706928 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.707409 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.707431 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.708179 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.706811 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.708215 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.708213 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.708407 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.708424 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.708467 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.708238 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709188 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709244 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709263 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709283 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709348 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709389 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709426 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709452 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709484 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709508 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709518 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.708528 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709548 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709611 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709659 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709710 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709760 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709809 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709821 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709857 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709967 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710013 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710050 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710089 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710127 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710164 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710198 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710234 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710273 5029 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710307 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710341 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710385 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710435 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710486 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710535 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710597 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710646 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710683 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710719 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710757 5029 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710795 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710829 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710906 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710983 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711041 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") 
pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711081 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711136 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711193 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711238 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711281 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711316 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711350 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711392 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711448 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711501 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711552 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 13 20:29:01 crc 
kubenswrapper[5029]: I0313 20:29:01.711598 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711649 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709875 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709900 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711710 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711761 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.709997 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711806 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711813 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711813 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712028 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712070 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712108 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712133 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712152 5029 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712173 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712244 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712266 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712290 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712315 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712340 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712362 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712388 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712587 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712610 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712632 5029 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712655 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712675 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712697 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712719 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712740 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712760 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712784 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712810 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712830 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712854 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712893 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712914 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712933 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712957 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712981 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713005 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713030 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711013 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713054 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713138 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-etc-openvswitch\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713178 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713204 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-cnibin\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713223 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-os-release\") pod 
\"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713246 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713270 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-cnibin\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713291 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-openvswitch\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713311 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-netd\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713340 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713365 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713417 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmv8\" (UniqueName: \"kubernetes.io/projected/9aa07f40-f2db-461a-871b-85f3693e9069-kube-api-access-dfmv8\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713436 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-log-socket\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713455 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fce41f1f-fd4c-42c0-b6ff-67410230a662-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713484 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-run-multus-certs\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713524 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fce41f1f-fd4c-42c0-b6ff-67410230a662-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713551 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-546vn\" (UniqueName: \"kubernetes.io/projected/fce41f1f-fd4c-42c0-b6ff-67410230a662-kube-api-access-546vn\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713582 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713604 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-var-lib-cni-multus\") pod \"multus-2thxr\" (UID: 
\"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713625 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-netns\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713646 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-config\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713667 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-env-overrides\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713688 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-system-cni-dir\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713709 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713732 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sk7p\" (UniqueName: \"kubernetes.io/projected/fa028723-a519-4f82-860c-4c149f3a4e4a-kube-api-access-9sk7p\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713760 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-var-lib-openvswitch\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710499 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710559 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710642 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710955 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.714134 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711004 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711063 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711384 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711477 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711666 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.714277 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711949 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.711971 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.710029 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712031 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712170 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.714328 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712322 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712396 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.712677 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713007 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713026 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713104 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713500 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713688 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.713767 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.714682 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.714958 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.714986 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.715327 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.715480 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.715692 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.715790 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.715851 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.716065 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.715283 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.716611 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.716728 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.717264 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.717404 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.717742 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.717828 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.717955 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.718000 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.718022 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.718039 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.717855 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.718270 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.718290 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.718215 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.718406 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.719538 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.719561 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.719700 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.720150 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.720518 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.720534 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.720621 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-script-lib\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.720670 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.720718 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.720741 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721110 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-var-lib-kubelet\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721142 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4tbf\" (UniqueName: \"kubernetes.io/projected/fa0fc000-74cb-4d5d-91b7-73d004abc007-kube-api-access-d4tbf\") pod \"node-resolver-5xkjw\" (UID: \"fa0fc000-74cb-4d5d-91b7-73d004abc007\") " pod="openshift-dns/node-resolver-5xkjw" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721169 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fce41f1f-fd4c-42c0-b6ff-67410230a662-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721191 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-system-cni-dir\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721211 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-run-netns\") pod \"multus-2thxr\" (UID: 
\"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721235 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9aa07f40-f2db-461a-871b-85f3693e9069-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721256 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa028723-a519-4f82-860c-4c149f3a4e4a-rootfs\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721276 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-hostroot\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721299 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-conf-dir\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721316 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-etc-kubernetes\") pod \"multus-2thxr\" (UID: 
\"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721335 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gf6\" (UniqueName: \"kubernetes.io/projected/a301620b-657c-46c0-a1a4-f7774e38f273-kube-api-access-l2gf6\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721355 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5nhv\" (UniqueName: \"kubernetes.io/projected/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-kube-api-access-s5nhv\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721364 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721434 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-var-lib-cni-bin\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721466 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721488 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721508 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-cni-dir\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721530 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa07f40-f2db-461a-871b-85f3693e9069-cni-binary-copy\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721552 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-ovn-kubernetes\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721569 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-run-k8s-cni-cncf-io\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721677 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa028723-a519-4f82-860c-4c149f3a4e4a-proxy-tls\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721702 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.721726 5029 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721783 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721817 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w2gz\" (UniqueName: 
\"kubernetes.io/projected/08946f02-ffb6-404b-b25c-6c261e8c2633-kube-api-access-9w2gz\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721866 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae27301f-09d6-4818-8896-d53499075139-host\") pod \"node-ca-jflsf\" (UID: \"ae27301f-09d6-4818-8896-d53499075139\") " pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721895 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae27301f-09d6-4818-8896-d53499075139-serviceca\") pod \"node-ca-jflsf\" (UID: \"ae27301f-09d6-4818-8896-d53499075139\") " pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721926 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.721981 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-node-log\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722007 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722011 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722033 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-daemon-config\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722060 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722090 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjhj\" (UniqueName: \"kubernetes.io/projected/ae27301f-09d6-4818-8896-d53499075139-kube-api-access-kvjhj\") pod \"node-ca-jflsf\" (UID: \"ae27301f-09d6-4818-8896-d53499075139\") " pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722119 5029 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722124 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722112 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-systemd\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722203 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722228 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-os-release\") pod \"multus-2thxr\" (UID: 
\"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.722236 5029 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.722338 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:02.222317687 +0000 UTC m=+102.238400090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722342 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722363 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-systemd-units\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722403 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-slash\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722425 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-ovn\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.722456 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:02.22243425 +0000 UTC m=+102.238516643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722795 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.722984 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.723073 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.723117 5029 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.724089 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.725902 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.726665 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.727087 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.727198 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.730925 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731083 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa028723-a519-4f82-860c-4c149f3a4e4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731137 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731169 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovn-node-metrics-cert\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731202 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/08946f02-ffb6-404b-b25c-6c261e8c2633-cni-binary-copy\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731230 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-socket-dir-parent\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731261 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa0fc000-74cb-4d5d-91b7-73d004abc007-hosts-file\") pod \"node-resolver-5xkjw\" (UID: \"fa0fc000-74cb-4d5d-91b7-73d004abc007\") " pod="openshift-dns/node-resolver-5xkjw" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731292 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731319 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-kubelet\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731339 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-bin\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731528 5029 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731549 5029 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731563 5029 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731583 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731606 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731619 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731639 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731653 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731667 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731683 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731712 5029 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731731 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731745 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731757 5029 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731768 5029 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731782 5029 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731796 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731809 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" 
DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731819 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731830 5029 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731839 5029 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731856 5029 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731884 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731899 5029 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731911 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731922 5029 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731932 5029 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731943 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731953 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731963 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731973 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731983 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.731995 5029 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732005 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732015 5029 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732004 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732024 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732088 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732147 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732161 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732172 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732183 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732193 5029 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 
13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732203 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732213 5029 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732222 5029 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732233 5029 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732246 5029 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732256 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732266 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732277 5029 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732287 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732296 5029 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732305 5029 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732316 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732325 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732334 5029 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732344 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732353 5029 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732363 5029 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732373 5029 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732382 5029 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732392 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732403 5029 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732414 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732423 5029 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732432 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732442 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732452 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732462 5029 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732471 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732480 5029 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732489 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732498 5029 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732507 5029 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732517 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732526 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732536 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732545 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732554 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732563 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732572 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732582 5029 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732590 5029 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732599 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732598 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732608 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732686 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732697 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732705 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732715 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732725 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732738 5029 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732749 5029 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732761 5029 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732776 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732788 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732815 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732828 5029 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732839 5029 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732751 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732885 5029 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732903 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732917 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732930 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732940 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.732991 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733001 5029 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733010 5029 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733024 5029 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733032 5029 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733041 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733050 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733059 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733067 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733077 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733087 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733099 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733110 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733122 5029 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733131 5029 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733141 5029 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733150 5029 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733159 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733172 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733181 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733190 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733199 5029 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733210 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733221 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733230 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733239 5029 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.733248 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.735710 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.735728 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.735741 5029 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.735797 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:02.235778581 +0000 UTC m=+102.251860984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.736544 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.736794 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.737228 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.737258 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.737268 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.737282 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.737291 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.738023 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.738033 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.738273 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.738367 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.738436 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.739664 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.740104 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.748120 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.748334 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.748358 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.748542 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.748958 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.749800 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.750103 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.750432 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.750450 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.750494 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.750537 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.750588 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.750644 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.753599 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.753710 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.753776 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.753792 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.754051 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.754081 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.754060 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.754153 5029 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.754224 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:02.254186567 +0000 UTC m=+102.270268970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.754328 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.755009 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.755268 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.755272 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.755352 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.755469 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.755284 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.755569 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.755685 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.755759 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.756016 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.756470 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.756663 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.757046 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.757283 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.757900 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.758269 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.758345 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.758398 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.758427 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.758465 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.758665 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.759458 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.759808 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.760188 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.760738 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.760793 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.760983 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.761137 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.761986 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.762098 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.762127 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.762573 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.762755 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.762826 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.762995 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.770263 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.776007 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.781327 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.783405 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.790929 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.793300 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.795426 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.812305 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.824746 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.834076 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-socket-dir-parent\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.834415 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa0fc000-74cb-4d5d-91b7-73d004abc007-hosts-file\") pod \"node-resolver-5xkjw\" (UID: 
\"fa0fc000-74cb-4d5d-91b7-73d004abc007\") " pod="openshift-dns/node-resolver-5xkjw" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.834552 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.834641 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa0fc000-74cb-4d5d-91b7-73d004abc007-hosts-file\") pod \"node-resolver-5xkjw\" (UID: \"fa0fc000-74cb-4d5d-91b7-73d004abc007\") " pod="openshift-dns/node-resolver-5xkjw" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.834466 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-socket-dir-parent\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.834747 5029 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:01 crc kubenswrapper[5029]: E0313 20:29:01.834992 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs podName:a301620b-657c-46c0-a1a4-f7774e38f273 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:02.334974091 +0000 UTC m=+102.351056494 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs") pod "network-metrics-daemon-frlln" (UID: "a301620b-657c-46c0-a1a4-f7774e38f273") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.834880 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-kubelet\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.834813 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-kubelet\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835230 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-bin\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835310 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovn-node-metrics-cert\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835336 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-bin\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835381 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/08946f02-ffb6-404b-b25c-6c261e8c2633-cni-binary-copy\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835507 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-cnibin\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835534 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-os-release\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835561 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-etc-openvswitch\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835579 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-openvswitch\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835611 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-netd\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835634 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835643 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-os-release\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835643 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-etc-openvswitch\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835676 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-cnibin\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835732 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835764 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-cnibin\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835760 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-openvswitch\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835925 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-netd\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835936 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmv8\" (UniqueName: 
\"kubernetes.io/projected/9aa07f40-f2db-461a-871b-85f3693e9069-kube-api-access-dfmv8\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835979 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-log-socket\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.835991 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-cnibin\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836021 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-log-socket\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836069 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fce41f1f-fd4c-42c0-b6ff-67410230a662-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836087 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-run-multus-certs\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836120 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fce41f1f-fd4c-42c0-b6ff-67410230a662-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836140 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-546vn\" (UniqueName: \"kubernetes.io/projected/fce41f1f-fd4c-42c0-b6ff-67410230a662-kube-api-access-546vn\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836173 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-var-lib-cni-multus\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836174 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-run-multus-certs\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836250 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-var-lib-cni-multus\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836189 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-env-overrides\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836290 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-system-cni-dir\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836313 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836332 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sk7p\" (UniqueName: \"kubernetes.io/projected/fa028723-a519-4f82-860c-4c149f3a4e4a-kube-api-access-9sk7p\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836381 5029 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-netns\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836409 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-config\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836431 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-var-lib-openvswitch\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836450 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-script-lib\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836466 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-var-lib-kubelet\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836481 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-system-cni-dir\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836501 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-run-netns\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836500 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-system-cni-dir\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836524 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9aa07f40-f2db-461a-871b-85f3693e9069-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836642 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa028723-a519-4f82-860c-4c149f3a4e4a-rootfs\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836682 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4tbf\" (UniqueName: 
\"kubernetes.io/projected/fa0fc000-74cb-4d5d-91b7-73d004abc007-kube-api-access-d4tbf\") pod \"node-resolver-5xkjw\" (UID: \"fa0fc000-74cb-4d5d-91b7-73d004abc007\") " pod="openshift-dns/node-resolver-5xkjw" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836703 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fce41f1f-fd4c-42c0-b6ff-67410230a662-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836724 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-etc-kubernetes\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836749 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2gf6\" (UniqueName: \"kubernetes.io/projected/a301620b-657c-46c0-a1a4-f7774e38f273-kube-api-access-l2gf6\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836767 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5nhv\" (UniqueName: \"kubernetes.io/projected/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-kube-api-access-s5nhv\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836786 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-var-lib-cni-bin\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836788 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fce41f1f-fd4c-42c0-b6ff-67410230a662-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836803 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-hostroot\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836825 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-conf-dir\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836848 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-cni-dir\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836878 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-etc-kubernetes\") pod \"multus-2thxr\" (UID: 
\"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.836890 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa07f40-f2db-461a-871b-85f3693e9069-cni-binary-copy\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.837072 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-ovn-kubernetes\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.837095 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.837124 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-env-overrides\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.837268 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9aa07f40-f2db-461a-871b-85f3693e9069-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: 
\"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.837606 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa07f40-f2db-461a-871b-85f3693e9069-cni-binary-copy\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.837664 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa028723-a519-4f82-860c-4c149f3a4e4a-proxy-tls\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.837713 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-var-lib-openvswitch\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.837729 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-var-lib-kubelet\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.838064 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-hostroot\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " 
pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.838126 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-ovn-kubernetes\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.838153 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-system-cni-dir\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.838243 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-cni-dir\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.838232 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa07f40-f2db-461a-871b-85f3693e9069-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.838588 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-script-lib\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.838642 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-conf-dir\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.838726 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-run-k8s-cni-cncf-io\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.838127 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-run-netns\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.839597 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.839636 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa028723-a519-4f82-860c-4c149f3a4e4a-rootfs\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.839665 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-netns\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.839691 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-var-lib-cni-bin\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.839944 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.839970 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.839982 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.839997 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.840009 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.840917 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w2gz\" (UniqueName: \"kubernetes.io/projected/08946f02-ffb6-404b-b25c-6c261e8c2633-kube-api-access-9w2gz\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.840959 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae27301f-09d6-4818-8896-d53499075139-host\") pod \"node-ca-jflsf\" (UID: \"ae27301f-09d6-4818-8896-d53499075139\") " pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.841008 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae27301f-09d6-4818-8896-d53499075139-serviceca\") pod \"node-ca-jflsf\" (UID: \"ae27301f-09d6-4818-8896-d53499075139\") " pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.841078 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-daemon-config\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.841121 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.841166 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kvjhj\" (UniqueName: \"kubernetes.io/projected/ae27301f-09d6-4818-8896-d53499075139-kube-api-access-kvjhj\") pod \"node-ca-jflsf\" (UID: \"ae27301f-09d6-4818-8896-d53499075139\") " pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.841187 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-systemd\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.847577 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-node-log\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.847703 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-systemd-units\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.847815 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-slash\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.847974 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-ovn\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848138 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-os-release\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848246 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa028723-a519-4f82-860c-4c149f3a4e4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848401 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848480 5029 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848562 5029 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848663 5029 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.841904 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/08946f02-ffb6-404b-b25c-6c261e8c2633-multus-daemon-config\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848783 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-os-release\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848855 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848909 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848929 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848949 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848967 5029 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848984 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849002 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849018 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849030 5029 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849047 5029 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849042 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-slash\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.842023 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-systemd\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849134 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-ovn\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.842780 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae27301f-09d6-4818-8896-d53499075139-serviceca\") pod \"node-ca-jflsf\" (UID: \"ae27301f-09d6-4818-8896-d53499075139\") " pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.841926 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae27301f-09d6-4818-8896-d53499075139-host\") pod \"node-ca-jflsf\" (UID: \"ae27301f-09d6-4818-8896-d53499075139\") " pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.843173 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovn-node-metrics-cert\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.840510 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/08946f02-ffb6-404b-b25c-6c261e8c2633-host-run-k8s-cni-cncf-io\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849216 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-systemd-units\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849063 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849334 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849354 5029 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849370 5029 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849384 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc 
kubenswrapper[5029]: I0313 20:29:01.849397 5029 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849410 5029 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849424 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849436 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849449 5029 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849460 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849471 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849481 5029 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849493 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849517 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849528 5029 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849541 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849552 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849565 5029 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849576 5029 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849586 5029 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849596 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849605 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849617 5029 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849627 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849638 5029 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849648 5029 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" 
Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849660 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849669 5029 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849680 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849692 5029 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849703 5029 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849714 5029 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849726 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: 
I0313 20:29:01.849738 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849749 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849759 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849773 5029 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849783 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849794 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849811 5029 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849823 5029 reconciler_common.go:293] "Volume 
detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849833 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849843 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849859 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849887 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849897 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849896 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa028723-a519-4f82-860c-4c149f3a4e4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849908 5029 reconciler_common.go:293] "Volume 
detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849962 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.849979 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.841682 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.848544 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa028723-a519-4f82-860c-4c149f3a4e4a-proxy-tls\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.847046 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fce41f1f-fd4c-42c0-b6ff-67410230a662-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.850042 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-node-log\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.851632 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fce41f1f-fd4c-42c0-b6ff-67410230a662-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.852013 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/08946f02-ffb6-404b-b25c-6c261e8c2633-cni-binary-copy\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.854417 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmv8\" (UniqueName: \"kubernetes.io/projected/9aa07f40-f2db-461a-871b-85f3693e9069-kube-api-access-dfmv8\") pod \"multus-additional-cni-plugins-zrq2k\" (UID: \"9aa07f40-f2db-461a-871b-85f3693e9069\") " pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.854420 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-config\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.857125 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s5nhv\" (UniqueName: \"kubernetes.io/projected/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-kube-api-access-s5nhv\") pod \"ovnkube-node-v2xrv\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.858580 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w2gz\" (UniqueName: \"kubernetes.io/projected/08946f02-ffb6-404b-b25c-6c261e8c2633-kube-api-access-9w2gz\") pod \"multus-2thxr\" (UID: \"08946f02-ffb6-404b-b25c-6c261e8c2633\") " pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.859154 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sk7p\" (UniqueName: \"kubernetes.io/projected/fa028723-a519-4f82-860c-4c149f3a4e4a-kube-api-access-9sk7p\") pod \"machine-config-daemon-28st2\" (UID: \"fa028723-a519-4f82-860c-4c149f3a4e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.859749 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2gf6\" (UniqueName: \"kubernetes.io/projected/a301620b-657c-46c0-a1a4-f7774e38f273-kube-api-access-l2gf6\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.865274 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvjhj\" (UniqueName: \"kubernetes.io/projected/ae27301f-09d6-4818-8896-d53499075139-kube-api-access-kvjhj\") pod \"node-ca-jflsf\" (UID: \"ae27301f-09d6-4818-8896-d53499075139\") " pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.866647 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-546vn\" (UniqueName: 
\"kubernetes.io/projected/fce41f1f-fd4c-42c0-b6ff-67410230a662-kube-api-access-546vn\") pod \"ovnkube-control-plane-749d76644c-z2p2c\" (UID: \"fce41f1f-fd4c-42c0-b6ff-67410230a662\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.866767 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4tbf\" (UniqueName: \"kubernetes.io/projected/fa0fc000-74cb-4d5d-91b7-73d004abc007-kube-api-access-d4tbf\") pod \"node-resolver-5xkjw\" (UID: \"fa0fc000-74cb-4d5d-91b7-73d004abc007\") " pod="openshift-dns/node-resolver-5xkjw" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.892509 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.901778 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.910793 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 13 20:29:01 crc kubenswrapper[5029]: W0313 20:29:01.917117 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c70b0275b2d4a928889fdbe315c902f7dbd2285617a2295f83b41aa0aacd29ff WatchSource:0}: Error finding container c70b0275b2d4a928889fdbe315c902f7dbd2285617a2295f83b41aa0aacd29ff: Status 404 returned error can't find the container with id c70b0275b2d4a928889fdbe315c902f7dbd2285617a2295f83b41aa0aacd29ff Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.917747 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jflsf" Mar 13 20:29:01 crc kubenswrapper[5029]: W0313 20:29:01.925411 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-2279f4ba5efc486944dfd11012646c2d73b63e92dcfdd28619f1134be09b9548 WatchSource:0}: Error finding container 2279f4ba5efc486944dfd11012646c2d73b63e92dcfdd28619f1134be09b9548: Status 404 returned error can't find the container with id 2279f4ba5efc486944dfd11012646c2d73b63e92dcfdd28619f1134be09b9548 Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.925658 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2thxr" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.934714 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.942644 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.944571 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.944608 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.944618 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.944632 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.944644 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:01Z","lastTransitionTime":"2026-03-13T20:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.948772 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:29:01 crc kubenswrapper[5029]: W0313 20:29:01.954562 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08946f02_ffb6_404b_b25c_6c261e8c2633.slice/crio-713372f534f046cc7f6842d87adcb4432b3c2182486970b084b68d959fb4bc0e WatchSource:0}: Error finding container 713372f534f046cc7f6842d87adcb4432b3c2182486970b084b68d959fb4bc0e: Status 404 returned error can't find the container with id 713372f534f046cc7f6842d87adcb4432b3c2182486970b084b68d959fb4bc0e Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.956893 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5xkjw" Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.961579 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:01 crc kubenswrapper[5029]: W0313 20:29:01.975797 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aa07f40_f2db_461a_871b_85f3693e9069.slice/crio-378940f6dba43e9908adeb5bcdfe54d8fb21e13f88e5f9aa88227e083482d438 WatchSource:0}: Error finding container 378940f6dba43e9908adeb5bcdfe54d8fb21e13f88e5f9aa88227e083482d438: Status 404 returned error can't find the container with id 378940f6dba43e9908adeb5bcdfe54d8fb21e13f88e5f9aa88227e083482d438 Mar 13 20:29:01 crc kubenswrapper[5029]: W0313 20:29:01.976747 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfce41f1f_fd4c_42c0_b6ff_67410230a662.slice/crio-b002c6a5230cb5f1b52107cf9cd6f0782aa81f601023719c91a232a1f6154f1f WatchSource:0}: Error finding container b002c6a5230cb5f1b52107cf9cd6f0782aa81f601023719c91a232a1f6154f1f: Status 404 returned error can't find the 
container with id b002c6a5230cb5f1b52107cf9cd6f0782aa81f601023719c91a232a1f6154f1f Mar 13 20:29:01 crc kubenswrapper[5029]: I0313 20:29:01.999661 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jflsf" event={"ID":"ae27301f-09d6-4818-8896-d53499075139","Type":"ContainerStarted","Data":"118c8d783cae2332660cd645fe0b351d5af6979ce3e2d5664a4dc3dd49711a43"} Mar 13 20:29:02 crc kubenswrapper[5029]: W0313 20:29:02.000743 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa028723_a519_4f82_860c_4c149f3a4e4a.slice/crio-e12e86221d4d7e491266cd40cbc8bdfa7ff39498fe532cd01aa3c85e9197943f WatchSource:0}: Error finding container e12e86221d4d7e491266cd40cbc8bdfa7ff39498fe532cd01aa3c85e9197943f: Status 404 returned error can't find the container with id e12e86221d4d7e491266cd40cbc8bdfa7ff39498fe532cd01aa3c85e9197943f Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.001023 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" event={"ID":"9aa07f40-f2db-461a-871b-85f3693e9069","Type":"ContainerStarted","Data":"378940f6dba43e9908adeb5bcdfe54d8fb21e13f88e5f9aa88227e083482d438"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.001804 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2279f4ba5efc486944dfd11012646c2d73b63e92dcfdd28619f1134be09b9548"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.002995 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"69db9e54070ec9b00876849033bef012992c430fa7c368181e9bb2c66d7ea53a"} Mar 13 20:29:02 crc kubenswrapper[5029]: W0313 20:29:02.003216 5029 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded9df53f_1a1d_4cbc_997a_79dbe299d2b6.slice/crio-6e515bbf26769a7aaf362f12e9fc8f01a21092122949f8788413ba54bd17ba2c WatchSource:0}: Error finding container 6e515bbf26769a7aaf362f12e9fc8f01a21092122949f8788413ba54bd17ba2c: Status 404 returned error can't find the container with id 6e515bbf26769a7aaf362f12e9fc8f01a21092122949f8788413ba54bd17ba2c Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.004643 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c70b0275b2d4a928889fdbe315c902f7dbd2285617a2295f83b41aa0aacd29ff"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.005580 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" event={"ID":"fce41f1f-fd4c-42c0-b6ff-67410230a662","Type":"ContainerStarted","Data":"b002c6a5230cb5f1b52107cf9cd6f0782aa81f601023719c91a232a1f6154f1f"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.011784 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2thxr" event={"ID":"08946f02-ffb6-404b-b25c-6c261e8c2633","Type":"ContainerStarted","Data":"713372f534f046cc7f6842d87adcb4432b3c2182486970b084b68d959fb4bc0e"} Mar 13 20:29:02 crc kubenswrapper[5029]: W0313 20:29:02.033085 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa0fc000_74cb_4d5d_91b7_73d004abc007.slice/crio-7d725ae9cd743b57beaeb81bd288d7d76e04e02144eb1d647927924af0a4efcc WatchSource:0}: Error finding container 7d725ae9cd743b57beaeb81bd288d7d76e04e02144eb1d647927924af0a4efcc: Status 404 returned error can't find the container with id 7d725ae9cd743b57beaeb81bd288d7d76e04e02144eb1d647927924af0a4efcc Mar 13 
20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.047889 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.047922 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.047930 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.047947 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.047956 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.149798 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.149834 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.149845 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.149883 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.149893 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.252797 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.253393 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.253411 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.253436 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.253451 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.254522 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.254683 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.254722 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.254749 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.254789 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.254984 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255008 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255021 5029 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255071 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:03.255057288 +0000 UTC m=+103.271139691 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255394 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:29:03.255385537 +0000 UTC m=+103.271467940 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255448 5029 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255479 5029 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255492 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-03-13 20:29:03.25548012 +0000 UTC m=+103.271562513 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255452 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255505 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:03.255499041 +0000 UTC m=+103.271581444 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255518 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255530 5029 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.255567 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:03.255557243 +0000 UTC m=+103.271639646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.355172 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.355392 5029 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: E0313 20:29:02.355512 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs podName:a301620b-657c-46c0-a1a4-f7774e38f273 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:03.355486023 +0000 UTC m=+103.371568426 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs") pod "network-metrics-daemon-frlln" (UID: "a301620b-657c-46c0-a1a4-f7774e38f273") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.356571 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.356630 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.356642 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.356668 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.356688 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.459709 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.459764 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.459776 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.459796 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.459808 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.563726 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.563773 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.563786 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.563807 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.563818 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.605276 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.606790 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.609247 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.610685 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.613518 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.614639 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.616150 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.618259 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.619728 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.620592 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.621546 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.622611 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.624295 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.624986 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.625644 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.626251 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.626943 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.627502 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.628180 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.630156 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.630719 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.631427 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.632365 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.633056 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.634022 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.634671 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.636081 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.636613 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.637679 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.638197 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.638866 5029 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.638999 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.641661 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.642312 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.643413 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.645024 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.645814 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.646436 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.647296 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.648231 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.648750 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.649389 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.650017 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.650636 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.651151 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.651842 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.652455 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.653328 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.653987 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.654473 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.656613 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.657381 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.658545 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.659138 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.668929 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.668972 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc 
kubenswrapper[5029]: I0313 20:29:02.668982 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.668998 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.669009 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.772061 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.772324 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.772405 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.772510 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.772585 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.875126 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.875163 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.875173 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.875188 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.875200 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.978466 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.978515 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.978525 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.978541 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:02 crc kubenswrapper[5029]: I0313 20:29:02.978551 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:02Z","lastTransitionTime":"2026-03-13T20:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.028542 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jflsf" event={"ID":"ae27301f-09d6-4818-8896-d53499075139","Type":"ContainerStarted","Data":"ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.030920 5029 generic.go:334] "Generic (PLEG): container finished" podID="9aa07f40-f2db-461a-871b-85f3693e9069" containerID="68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8" exitCode=0 Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.031035 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" event={"ID":"9aa07f40-f2db-461a-871b-85f3693e9069","Type":"ContainerDied","Data":"68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.033302 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.033348 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.035968 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0" exitCode=0 Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.036063 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.036129 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"6e515bbf26769a7aaf362f12e9fc8f01a21092122949f8788413ba54bd17ba2c"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.040459 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.040528 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.040546 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"e12e86221d4d7e491266cd40cbc8bdfa7ff39498fe532cd01aa3c85e9197943f"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.042414 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2thxr" event={"ID":"08946f02-ffb6-404b-b25c-6c261e8c2633","Type":"ContainerStarted","Data":"8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.045426 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.045487 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.049391 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" event={"ID":"fce41f1f-fd4c-42c0-b6ff-67410230a662","Type":"ContainerStarted","Data":"2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.049446 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" event={"ID":"fce41f1f-fd4c-42c0-b6ff-67410230a662","Type":"ContainerStarted","Data":"7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.051139 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5xkjw" event={"ID":"fa0fc000-74cb-4d5d-91b7-73d004abc007","Type":"ContainerStarted","Data":"81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.051178 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5xkjw" event={"ID":"fa0fc000-74cb-4d5d-91b7-73d004abc007","Type":"ContainerStarted","Data":"7d725ae9cd743b57beaeb81bd288d7d76e04e02144eb1d647927924af0a4efcc"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.061526 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.084434 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.084486 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.084501 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.084523 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.084537 5029 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:03Z","lastTransitionTime":"2026-03-13T20:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.085397 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.097264 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.111559 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc 
kubenswrapper[5029]: I0313 20:29:03.136982 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.154224 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.170366 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.182113 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.200016 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.200068 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.200081 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.200103 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.200120 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:03Z","lastTransitionTime":"2026-03-13T20:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.201670 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.217504 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.239802 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.254979 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.267160 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.267460 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.267545 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.267636 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.267748 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.267989 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268085 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268173 5029 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268298 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268323 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268335 5029 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268412 5029 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268065 5029 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.267957 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268133 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:29:05.268118304 +0000 UTC m=+105.284200707 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268606 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:05.268596507 +0000 UTC m=+105.284678910 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268618 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:05.268612618 +0000 UTC m=+105.284695021 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268628 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:05.268623308 +0000 UTC m=+105.284705711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.268638 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:05.268633798 +0000 UTC m=+105.284716201 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.284812 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.306320 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.306501 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.306619 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.306720 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.306835 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:03Z","lastTransitionTime":"2026-03-13T20:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.309123 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.335014 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.353820 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.366109 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.368640 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.368949 5029 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.369082 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs podName:a301620b-657c-46c0-a1a4-f7774e38f273 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:05.369054594 +0000 UTC m=+105.385137197 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs") pod "network-metrics-daemon-frlln" (UID: "a301620b-657c-46c0-a1a4-f7774e38f273") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.381949 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc 
kubenswrapper[5029]: I0313 20:29:03.401015 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.409780 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.409826 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.409837 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.409857 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 
20:29:03.409880 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:03Z","lastTransitionTime":"2026-03-13T20:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.420648 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.437448 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.450490 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.475120 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.491694 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.504177 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.513512 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.513575 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.513589 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.513610 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.513623 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:03Z","lastTransitionTime":"2026-03-13T20:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.517241 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:03Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.599055 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.599139 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.599194 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.599050 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.599159 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.599397 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.599569 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:03 crc kubenswrapper[5029]: E0313 20:29:03.599661 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.616320 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.616464 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.616487 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.616514 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.616563 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:03Z","lastTransitionTime":"2026-03-13T20:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.720015 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.720056 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.720067 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.720085 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.720100 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:03Z","lastTransitionTime":"2026-03-13T20:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.823154 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.823198 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.823208 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.823224 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.823236 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:03Z","lastTransitionTime":"2026-03-13T20:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.925782 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.925839 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.925877 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.925897 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:03 crc kubenswrapper[5029]: I0313 20:29:03.925908 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:03Z","lastTransitionTime":"2026-03-13T20:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.029921 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.030626 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.030678 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.030712 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.030733 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.057005 5029 generic.go:334] "Generic (PLEG): container finished" podID="9aa07f40-f2db-461a-871b-85f3693e9069" containerID="6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2" exitCode=0 Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.057116 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" event={"ID":"9aa07f40-f2db-461a-871b-85f3693e9069","Type":"ContainerDied","Data":"6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.068959 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.069015 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.069031 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.079911 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.101056 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.118084 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.135673 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.135761 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.135779 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.135802 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.135834 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.136535 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.149877 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.164813 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.178709 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.213054 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc 
kubenswrapper[5029]: I0313 20:29:04.240573 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.240611 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.240621 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.240636 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.240645 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.259258 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.284416 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.311300 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.325849 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.341012 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d79
3426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.343483 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.343512 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.343526 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.343545 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.343559 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.364793 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:04Z 
is after 2025-08-24T17:21:41Z" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.446710 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.446942 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.446953 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.446970 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.446999 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.550569 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.550612 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.550642 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.550663 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.550676 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.654280 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.654324 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.654335 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.654354 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.654366 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.757383 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.757447 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.757466 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.757493 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.757515 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.861145 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.861202 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.861212 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.861233 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.861247 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.964803 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.964906 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.964926 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.964958 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:04 crc kubenswrapper[5029]: I0313 20:29:04.964978 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:04Z","lastTransitionTime":"2026-03-13T20:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.068213 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.068270 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.068281 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.068300 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.068312 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.075315 5029 generic.go:334] "Generic (PLEG): container finished" podID="9aa07f40-f2db-461a-871b-85f3693e9069" containerID="b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60" exitCode=0 Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.075408 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" event={"ID":"9aa07f40-f2db-461a-871b-85f3693e9069","Type":"ContainerDied","Data":"b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.077058 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.082480 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.082522 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.082534 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.097451 5029 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.115752 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.129707 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.147479 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.164943 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.171608 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.171656 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.171685 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.171731 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.171742 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.180728 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 
20:29:05.195128 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.211504 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.224877 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.238425 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.254644 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.270337 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.276113 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 
20:29:05.276148 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.276159 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.276178 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.276194 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.283405 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.294620 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc 
kubenswrapper[5029]: I0313 20:29:05.297188 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.297387 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.297447 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.297473 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.297513 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" 
(UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297619 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:29:09.2975719 +0000 UTC m=+109.313654363 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297666 5029 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297711 5029 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297741 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297772 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297702 5029 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297793 5029 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297803 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297799 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:09.297770476 +0000 UTC m=+109.313853049 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297816 5029 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297834 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:09.297824108 +0000 UTC m=+109.313906741 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297877 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:09.297848819 +0000 UTC m=+109.313931452 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.297901 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:09.29788898 +0000 UTC m=+109.313971623 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.319439 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.333597 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.347734 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.361100 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.380330 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.380368 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.380609 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.380640 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.380672 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.380713 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.398978 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.399209 5029 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.399332 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs podName:a301620b-657c-46c0-a1a4-f7774e38f273 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:09.399302073 +0000 UTC m=+109.415384476 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs") pod "network-metrics-daemon-frlln" (UID: "a301620b-657c-46c0-a1a4-f7774e38f273") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.400597 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.420978 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.435647 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 
20:29:05.447622 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.460791 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.477178 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.483792 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.483832 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.483843 5029 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.483881 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.483899 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.490959 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502
f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.505662 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc 
kubenswrapper[5029]: I0313 20:29:05.523827 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:05Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.586876 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.586948 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.586959 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.586980 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 
20:29:05.586993 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.598660 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.598695 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.598661 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.598660 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.598789 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.598909 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.599193 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:05 crc kubenswrapper[5029]: E0313 20:29:05.599249 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.690343 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.690409 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.690420 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.690442 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.690456 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.793970 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.794034 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.794050 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.794086 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.794109 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.896699 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.896756 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.896771 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.896790 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.896806 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.999074 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.999150 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.999168 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.999200 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:05 crc kubenswrapper[5029]: I0313 20:29:05.999219 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:05Z","lastTransitionTime":"2026-03-13T20:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.091424 5029 generic.go:334] "Generic (PLEG): container finished" podID="9aa07f40-f2db-461a-871b-85f3693e9069" containerID="929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304" exitCode=0 Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.091564 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" event={"ID":"9aa07f40-f2db-461a-871b-85f3693e9069","Type":"ContainerDied","Data":"929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.101683 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.101820 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.101842 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.101923 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.101941 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:06Z","lastTransitionTime":"2026-03-13T20:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.118652 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z 
is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.134787 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.152710 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab801
19ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.168890 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.182694 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.200263 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.204953 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.204991 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.205001 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.205016 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.205026 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:06Z","lastTransitionTime":"2026-03-13T20:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.217710 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.230225 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.246628 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.263142 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.280078 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.293971 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.308432 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.308479 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.308489 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.308507 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.308516 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:06Z","lastTransitionTime":"2026-03-13T20:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.308643 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc 
kubenswrapper[5029]: I0313 20:29:06.327690 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:06Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.411602 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.411640 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.411651 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.411669 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.411681 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:06Z","lastTransitionTime":"2026-03-13T20:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.513838 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.513906 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.513916 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.513956 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.513967 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:06Z","lastTransitionTime":"2026-03-13T20:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.616394 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.616427 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.616435 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.616447 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.616457 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:06Z","lastTransitionTime":"2026-03-13T20:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.718257 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.718287 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.718294 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.718309 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.718317 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:06Z","lastTransitionTime":"2026-03-13T20:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.821731 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.821798 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.821831 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.821879 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.821895 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:06Z","lastTransitionTime":"2026-03-13T20:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.926353 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.926419 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.926435 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.926459 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:06 crc kubenswrapper[5029]: I0313 20:29:06.926474 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:06Z","lastTransitionTime":"2026-03-13T20:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.028666 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.028725 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.028760 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.028775 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.028786 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.102923 5029 generic.go:334] "Generic (PLEG): container finished" podID="9aa07f40-f2db-461a-871b-85f3693e9069" containerID="6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85" exitCode=0 Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.103042 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" event={"ID":"9aa07f40-f2db-461a-871b-85f3693e9069","Type":"ContainerDied","Data":"6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.110223 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.119961 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e
252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.132063 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.132124 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.132135 5029 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.132152 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.132163 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.135766 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b
682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\
\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.155487 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\
\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.181443 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.196087 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.210891 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.229073 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.242550 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.242589 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.242603 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.242624 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.242636 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.243673 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.258452 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.271411 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.285286 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc 
kubenswrapper[5029]: I0313 20:29:07.301265 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.314299 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.337497 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:07Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.345530 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.345581 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.345592 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.345616 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.345643 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.449726 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.449807 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.449828 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.449892 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.449921 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.554622 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.555188 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.555201 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.555222 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.555235 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.598883 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.598894 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.599017 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.599174 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:07 crc kubenswrapper[5029]: E0313 20:29:07.599280 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:07 crc kubenswrapper[5029]: E0313 20:29:07.599435 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:07 crc kubenswrapper[5029]: E0313 20:29:07.599550 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:07 crc kubenswrapper[5029]: E0313 20:29:07.599756 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.658189 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.658369 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.658386 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.658406 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.658422 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.760909 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.760948 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.760955 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.760971 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.760980 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.863354 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.863386 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.863405 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.863421 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.863432 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.966379 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.966414 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.966435 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.966453 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:07 crc kubenswrapper[5029]: I0313 20:29:07.966465 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:07Z","lastTransitionTime":"2026-03-13T20:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.069235 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.069281 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.069292 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.069311 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.069323 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:08Z","lastTransitionTime":"2026-03-13T20:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.117082 5029 generic.go:334] "Generic (PLEG): container finished" podID="9aa07f40-f2db-461a-871b-85f3693e9069" containerID="631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e" exitCode=0 Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.117129 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" event={"ID":"9aa07f40-f2db-461a-871b-85f3693e9069","Type":"ContainerDied","Data":"631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.132424 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.149325 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.162065 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.173366 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.173402 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.173413 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.173428 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.173438 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:08Z","lastTransitionTime":"2026-03-13T20:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.175469 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc 
kubenswrapper[5029]: I0313 20:29:08.190997 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.210890 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.226091 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.239573 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.254910 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.270121 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.277481 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.277535 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.277551 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.277573 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.277589 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:08Z","lastTransitionTime":"2026-03-13T20:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.287291 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.307102 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.325290 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.339300 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:08Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.381075 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.381114 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.381122 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.381138 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.381148 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:08Z","lastTransitionTime":"2026-03-13T20:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.484063 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.484114 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.484125 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.484145 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.484157 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:08Z","lastTransitionTime":"2026-03-13T20:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.587548 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.587591 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.587608 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.587630 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.587643 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:08Z","lastTransitionTime":"2026-03-13T20:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.691610 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.691657 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.691670 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.691693 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.691704 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:08Z","lastTransitionTime":"2026-03-13T20:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.797837 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.798465 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.798517 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.798546 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.798593 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:08Z","lastTransitionTime":"2026-03-13T20:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.901179 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.901220 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.901230 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.901245 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:08 crc kubenswrapper[5029]: I0313 20:29:08.901256 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:08Z","lastTransitionTime":"2026-03-13T20:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.004358 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.004490 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.004564 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.004632 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.004700 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.107944 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.107995 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.108008 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.108027 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.108039 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.128387 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.129036 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.137429 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" event={"ID":"9aa07f40-f2db-461a-871b-85f3693e9069","Type":"ContainerStarted","Data":"290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.151721 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59
afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.154909 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.165475 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.181502 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.196279 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd
3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.208963 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.210809 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.210877 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.210891 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.210907 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.210917 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.221615 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.234212 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.249258 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.259957 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.270136 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.281620 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc 
kubenswrapper[5029]: I0313 20:29:09.297756 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.313731 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.313805 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.313820 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.313845 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 
20:29:09.313879 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.315293 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\
\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.327361 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.342486 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.348653 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.348865 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:29:17.348811425 +0000 UTC m=+117.364893828 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.349153 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.349236 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.349305 
5029 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.349349 5029 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.349443 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.349488 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:17.349475254 +0000 UTC m=+117.365557657 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.349568 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.349611 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.349628 5029 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.349718 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:17.349581167 +0000 UTC m=+117.365663570 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.349837 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.349914 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:17.349876565 +0000 UTC m=+117.365959018 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.349992 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.350136 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.350195 5029 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.350285 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:17.350276676 +0000 UTC m=+117.366359079 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.356912 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.370597 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.383566 5029 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.399878 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.417157 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.417201 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.417391 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.417415 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.417434 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.418279 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.434097 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.446923 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.450755 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.450920 5029 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.450992 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs podName:a301620b-657c-46c0-a1a4-f7774e38f273 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:17.45097408 +0000 UTC m=+117.467056483 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs") pod "network-metrics-daemon-frlln" (UID: "a301620b-657c-46c0-a1a4-f7774e38f273") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.460527 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.473496 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.487761 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.500651 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.514743 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc 
kubenswrapper[5029]: I0313 20:29:09.520111 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.520147 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.520158 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.520173 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.520184 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.541931 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:09Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.598536 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.598594 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.598563 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.598547 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.598669 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.598747 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.598824 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:09 crc kubenswrapper[5029]: E0313 20:29:09.598906 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.622628 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.622671 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.622682 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.622702 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.622714 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.724824 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.724878 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.724888 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.724901 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.724909 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.826610 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.826660 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.826671 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.826686 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.826720 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.929340 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.929378 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.929387 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.929400 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:09 crc kubenswrapper[5029]: I0313 20:29:09.929412 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:09Z","lastTransitionTime":"2026-03-13T20:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.033125 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.033192 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.033211 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.033246 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.033284 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.135246 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.135283 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.135292 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.135308 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.135317 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.140721 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.140772 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.171431 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.191406 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.205427 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.221566 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.236081 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.237744 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.237809 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.237822 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.237838 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.237900 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.248595 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.263037 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.275879 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.285728 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.296659 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc 
kubenswrapper[5029]: I0313 20:29:10.312785 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.336252 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.340797 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.340829 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.340837 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.340849 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.340877 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.350390 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z 
is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.360423 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.370913 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab801
19ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.442739 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.442783 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.442793 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 
13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.442811 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.442822 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.544729 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.544770 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.544779 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.544795 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.544807 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.624705 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.644144 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.646773 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.646813 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.646828 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.646859 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.646870 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.656936 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.675649 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab801
19ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.693258 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.711472 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.726795 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.741815 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.750227 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.750250 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.750259 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.750271 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.750281 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.753544 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.766748 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.777981 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.789875 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.800793 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.810296 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:10 crc 
kubenswrapper[5029]: I0313 20:29:10.852775 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.852807 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.852818 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.852833 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.852844 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.955056 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.955118 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.955128 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.955144 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:10 crc kubenswrapper[5029]: I0313 20:29:10.955156 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:10Z","lastTransitionTime":"2026-03-13T20:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.058054 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.058091 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.058102 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.058120 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.058132 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.160927 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.161006 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.161021 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.161050 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.161066 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.264032 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.264070 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.264081 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.264099 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.264110 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.367098 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.367124 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.367132 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.367148 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.367156 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.376336 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.376380 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.376395 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.376414 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.376429 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.397580 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:11Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.401265 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.401301 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.401311 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.401327 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.401338 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.412809 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:11Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.416761 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.416816 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.416825 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.416840 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.416866 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.428353 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:11Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.432337 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.432365 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.432375 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.432389 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.432400 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.443773 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:11Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.447548 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.447591 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.447601 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.447615 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.447625 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.460586 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:11Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.460755 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.469826 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.469905 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.469917 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.469936 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.469950 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.573313 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.573360 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.573372 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.573388 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.573398 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.599247 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.599290 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.599266 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.599376 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.599446 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.599571 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.599819 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:11 crc kubenswrapper[5029]: E0313 20:29:11.600190 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.642377 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.642814 5029 scope.go:117] "RemoveContainer" containerID="fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.676289 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.676328 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.676342 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.676363 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.676377 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.778614 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.778643 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.778652 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.778667 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.778677 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.882787 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.883087 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.883098 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.883112 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.883122 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.984871 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.984903 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.984911 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.984926 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:11 crc kubenswrapper[5029]: I0313 20:29:11.984935 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:11Z","lastTransitionTime":"2026-03-13T20:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.087544 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.087586 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.087595 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.087610 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.087618 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:12Z","lastTransitionTime":"2026-03-13T20:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.152032 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/0.log" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.154554 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d" exitCode=1 Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.154618 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.155295 5029 scope.go:117] "RemoveContainer" containerID="7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.156558 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.157968 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.158257 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.172234 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.185630 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd
3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.189335 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.189361 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.189371 5029 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.189384 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.189392 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:12Z","lastTransitionTime":"2026-03-13T20:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.199091 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6ea
c4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.214419 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.230530 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a53
3d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b
64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.244994 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.261746 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.277498 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.288023 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.291226 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.291255 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.291265 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.291278 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.291288 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:12Z","lastTransitionTime":"2026-03-13T20:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.299620 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.312202 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.324386 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.336118 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.346552 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc 
kubenswrapper[5029]: I0313 20:29:12.369527 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\" 6932 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 20:29:11.318598 6932 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0313 20:29:11.318688 6932 handler.go:190] Sending *v1.EgressIP 
event handler 8 for removal\\\\nI0313 20:29:11.318736 6932 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 20:29:11.318807 6932 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0313 20:29:11.318829 6932 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0313 20:29:11.318962 6932 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0313 20:29:11.318995 6932 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0313 20:29:11.319036 6932 factory.go:656] Stopping watch factory\\\\nI0313 20:29:11.319072 6932 ovnkube.go:599] Stopped ovnkube\\\\nI0313 20:29:11.319146 6932 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0313 20:29:11.319177 6932 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 20:29:11.319202 6932 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0313 20:29:11.319231 6932 handler.go:208] Removed *v1.Node event handler 2\\\\nI0313 20:29:11.319268 6932 handler.go:208] Removed *v1.Node event handler 7\\\\nI0313 20:29:11.319293 6932 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0313 
20\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.385226 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.394990 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.395044 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.395058 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.395076 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.395089 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:12Z","lastTransitionTime":"2026-03-13T20:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.406914 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc 
kubenswrapper[5029]: I0313 20:29:12.420440 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.431888 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.442721 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.468485 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\" 6932 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 20:29:11.318598 6932 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0313 20:29:11.318688 6932 handler.go:190] Sending *v1.EgressIP 
event handler 8 for removal\\\\nI0313 20:29:11.318736 6932 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0313 20:29:11.318807 6932 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0313 20:29:11.318829 6932 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0313 20:29:11.318962 6932 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0313 20:29:11.318995 6932 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0313 20:29:11.319036 6932 factory.go:656] Stopping watch factory\\\\nI0313 20:29:11.319072 6932 ovnkube.go:599] Stopped ovnkube\\\\nI0313 20:29:11.319146 6932 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0313 20:29:11.319177 6932 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 20:29:11.319202 6932 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0313 20:29:11.319231 6932 handler.go:208] Removed *v1.Node event handler 2\\\\nI0313 20:29:11.319268 6932 handler.go:208] Removed *v1.Node event handler 7\\\\nI0313 20:29:11.319293 6932 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0313 
20\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.482882 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.496474 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.497517 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:12 crc 
kubenswrapper[5029]: I0313 20:29:12.497563 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.497576 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.497592 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.497602 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:12Z","lastTransitionTime":"2026-03-13T20:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.513084 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e
252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.528637 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.544901 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.563995 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.585416 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.599716 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.600394 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.600450 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.600461 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.600476 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.600488 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:12Z","lastTransitionTime":"2026-03-13T20:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.614515 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.703123 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.703170 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.703182 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.703197 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.703208 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:12Z","lastTransitionTime":"2026-03-13T20:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.805788 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.805820 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.805829 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.805842 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.805868 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:12Z","lastTransitionTime":"2026-03-13T20:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.908267 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.908317 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.908329 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.908349 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:12 crc kubenswrapper[5029]: I0313 20:29:12.908359 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:12Z","lastTransitionTime":"2026-03-13T20:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.011444 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.011500 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.011516 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.011539 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.011557 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.114345 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.114401 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.114411 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.114433 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.114445 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.166149 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/0.log" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.170169 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.170617 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.189033 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\" 6932 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 20:29:11.318598 6932 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0313 20:29:11.318688 6932 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 20:29:11.318736 6932 handler.go:190] Sending *v1.EgressFirewall 
event handler 9 for removal\\\\nI0313 20:29:11.318807 6932 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0313 20:29:11.318829 6932 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0313 20:29:11.318962 6932 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0313 20:29:11.318995 6932 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0313 20:29:11.319036 6932 factory.go:656] Stopping watch factory\\\\nI0313 20:29:11.319072 6932 ovnkube.go:599] Stopped ovnkube\\\\nI0313 20:29:11.319146 6932 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0313 20:29:11.319177 6932 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 20:29:11.319202 6932 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0313 20:29:11.319231 6932 handler.go:208] Removed *v1.Node event handler 2\\\\nI0313 20:29:11.319268 6932 handler.go:208] Removed *v1.Node event handler 7\\\\nI0313 20:29:11.319293 6932 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0313 
20\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.201521 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.216569 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.216609 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.216620 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc 
kubenswrapper[5029]: I0313 20:29:13.216634 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.216644 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.219290 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.230223 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:
01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.243000 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.256226 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.270732 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.285149 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.300919 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.312406 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814331
44add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.318936 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.319001 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.319014 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc 
kubenswrapper[5029]: I0313 20:29:13.319034 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.319045 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.324837 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.336635 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc 
kubenswrapper[5029]: I0313 20:29:13.351339 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.365268 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.380402 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.421484 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.421820 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.421951 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.422070 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.422176 5029 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.524581 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.524883 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.525122 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.525327 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.525492 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.599264 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:13 crc kubenswrapper[5029]: E0313 20:29:13.599644 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.599329 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:13 crc kubenswrapper[5029]: E0313 20:29:13.600342 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.599275 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:13 crc kubenswrapper[5029]: E0313 20:29:13.600621 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.599362 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:13 crc kubenswrapper[5029]: E0313 20:29:13.600837 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.628006 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.628055 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.628067 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.628084 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.628095 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.730785 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.730833 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.730844 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.730880 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.730892 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.833792 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.833834 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.833938 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.833958 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.833971 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.937531 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.937825 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.937948 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.938027 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:13 crc kubenswrapper[5029]: I0313 20:29:13.938099 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:13Z","lastTransitionTime":"2026-03-13T20:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.041307 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.041354 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.041363 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.041379 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.041388 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.143707 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.143745 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.143754 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.143767 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.143779 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.175600 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/1.log" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.176250 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/0.log" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.178976 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0" exitCode=1 Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.179016 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.179065 5029 scope.go:117] "RemoveContainer" containerID="7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.179941 5029 scope.go:117] "RemoveContainer" containerID="05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0" Mar 13 20:29:14 crc kubenswrapper[5029]: E0313 20:29:14.180118 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.200479 5029 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.217663 5029 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.233416 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.247248 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.247323 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.247345 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc 
kubenswrapper[5029]: I0313 20:29:14.247380 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.247405 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.250608 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.269179 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92933
1dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.287780 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.305505 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.320830 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.332754 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.346142 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.350690 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.350748 5029 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.350762 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.350781 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.350795 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.360470 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.373450 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.384816 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.396283 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc 
kubenswrapper[5029]: I0313 20:29:14.417900 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bb5b73da43d80d6612f5d5fac1aff56dc55e09318d6f9c978716de46fe69f3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:11Z\\\",\\\"message\\\":\\\" 6932 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0313 20:29:11.318598 6932 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0313 20:29:11.318688 6932 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0313 20:29:11.318736 6932 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0313 20:29:11.318807 6932 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0313 20:29:11.318829 6932 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0313 20:29:11.318962 6932 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0313 20:29:11.318995 6932 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0313 20:29:11.319036 6932 factory.go:656] Stopping watch factory\\\\nI0313 20:29:11.319072 6932 ovnkube.go:599] Stopped ovnkube\\\\nI0313 20:29:11.319146 6932 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0313 20:29:11.319177 6932 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0313 20:29:11.319202 6932 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0313 20:29:11.319231 6932 handler.go:208] Removed *v1.Node event handler 2\\\\nI0313 20:29:11.319268 6932 handler.go:208] Removed *v1.Node event handler 7\\\\nI0313 20:29:11.319293 6932 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0313 20\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:13Z\\\",\\\"message\\\":\\\"network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0313 20:29:12.998146 7089 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0313 20:29:12.998234 7089 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-frlln\\\\nF0313 20:29:12.998367 7089 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z]\\\\nI0313 20:29:12.997822 7089 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-ch\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:14Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.454033 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.454115 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.454127 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.454141 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.454153 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.557287 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.557355 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.557372 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.557395 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.557411 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.660847 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.660924 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.660934 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.660953 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.660966 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.764480 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.764540 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.764561 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.764588 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.764644 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.868429 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.868489 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.868501 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.868524 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.868537 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.971754 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.971835 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.971896 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.971930 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:14 crc kubenswrapper[5029]: I0313 20:29:14.971951 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:14Z","lastTransitionTime":"2026-03-13T20:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.075056 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.075109 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.075129 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.075158 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.075171 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:15Z","lastTransitionTime":"2026-03-13T20:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.178488 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.178535 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.178549 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.178569 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.178584 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:15Z","lastTransitionTime":"2026-03-13T20:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.184586 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/1.log" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.190149 5029 scope.go:117] "RemoveContainer" containerID="05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0" Mar 13 20:29:15 crc kubenswrapper[5029]: E0313 20:29:15.190367 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.206163 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.219968 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.230609 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.241232 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc 
kubenswrapper[5029]: I0313 20:29:15.254399 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.275983 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:13Z\\\",\\\"message\\\":\\\"network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0313 20:29:12.998146 7089 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0313 20:29:12.998234 7089 kube.go:317] 
Updating pod openshift-multus/network-metrics-daemon-frlln\\\\nF0313 20:29:12.998367 7089 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z]\\\\nI0313 20:29:12.997822 7089 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-ch\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.281482 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.281524 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.281536 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.281555 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.281567 5029 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:15Z","lastTransitionTime":"2026-03-13T20:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.291395 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:
29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.303790 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.318763 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\
\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.338989 5029 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.360791 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.384406 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.384444 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.384453 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:15 crc 
kubenswrapper[5029]: I0313 20:29:15.384472 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.384482 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:15Z","lastTransitionTime":"2026-03-13T20:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.395523 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.410127 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92933
1dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.424161 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.434249 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814331
44add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:15Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.486538 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.486579 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.486589 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:15 crc 
kubenswrapper[5029]: I0313 20:29:15.486604 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.486622 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:15Z","lastTransitionTime":"2026-03-13T20:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.589399 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.589459 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.589472 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.589498 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.589518 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:15Z","lastTransitionTime":"2026-03-13T20:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.599203 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.599203 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.599424 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.599247 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:15 crc kubenswrapper[5029]: E0313 20:29:15.599349 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:15 crc kubenswrapper[5029]: E0313 20:29:15.599464 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:15 crc kubenswrapper[5029]: E0313 20:29:15.599562 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:15 crc kubenswrapper[5029]: E0313 20:29:15.599643 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.691653 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.691737 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.691749 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.691763 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.691775 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:15Z","lastTransitionTime":"2026-03-13T20:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.793972 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.794034 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.794055 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.794080 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.794097 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:15Z","lastTransitionTime":"2026-03-13T20:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.897033 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.897100 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.897110 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.897130 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:15 crc kubenswrapper[5029]: I0313 20:29:15.897141 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:15Z","lastTransitionTime":"2026-03-13T20:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.000677 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.000747 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.000771 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.000796 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.000812 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.104077 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.104143 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.104155 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.104170 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.104184 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.208153 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.208213 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.208235 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.208257 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.208272 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.311585 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.311651 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.311670 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.311702 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.311747 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.415213 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.415262 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.415274 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.415291 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.415304 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.518249 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.518986 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.519012 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.519047 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.519070 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.623188 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.623260 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.623274 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.623297 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.623312 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.727407 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.727458 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.727475 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.727503 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.727521 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.830592 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.830642 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.830653 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.830669 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.830679 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.932214 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.932260 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.932271 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.932286 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:16 crc kubenswrapper[5029]: I0313 20:29:16.932297 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:16Z","lastTransitionTime":"2026-03-13T20:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.034541 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.034600 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.034616 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.034633 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.034646 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.137111 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.137193 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.137204 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.137226 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.137242 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.240485 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.240533 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.240543 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.240561 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.240572 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.343741 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.343815 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.343825 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.343843 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.343877 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.433983 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.434152 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.434199 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.434240 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434326 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:29:33.43427864 +0000 UTC m=+133.450361083 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434430 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.434428 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434459 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434478 5029 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434469 5029 configmap.go:193] Couldn't 
get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434556 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:33.434531927 +0000 UTC m=+133.450614360 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434590 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434614 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434627 5029 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434647 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:33.434602239 +0000 UTC m=+133.450684802 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434660 5029 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434689 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:33.434675741 +0000 UTC m=+133.450758374 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.434758 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:33.434727363 +0000 UTC m=+133.450809806 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.447500 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.447675 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.447707 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.447779 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.447810 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.535941 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.536093 5029 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.536147 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs podName:a301620b-657c-46c0-a1a4-f7774e38f273 nodeName:}" failed. No retries permitted until 2026-03-13 20:29:33.536130086 +0000 UTC m=+133.552212489 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs") pod "network-metrics-daemon-frlln" (UID: "a301620b-657c-46c0-a1a4-f7774e38f273") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.550486 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.550530 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.550541 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.550556 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.550565 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.598736 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.598794 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.598826 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.598916 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.598930 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.599015 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.599406 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:17 crc kubenswrapper[5029]: E0313 20:29:17.599486 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.653096 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.653154 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.653171 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.653194 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.653211 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.755865 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.755917 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.755928 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.755946 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.755958 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.857873 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.857921 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.857938 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.857956 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.857969 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.960061 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.960100 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.960111 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.960126 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:17 crc kubenswrapper[5029]: I0313 20:29:17.960138 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:17Z","lastTransitionTime":"2026-03-13T20:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.062799 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.062833 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.062841 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.062870 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.062880 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.164669 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.164706 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.164714 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.164727 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.164736 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.266738 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.266781 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.266789 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.266801 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.266809 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.370108 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.370148 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.370158 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.370174 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.370183 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.472300 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.472377 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.472392 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.472414 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.472430 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.575683 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.575739 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.575767 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.575787 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.575799 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.680408 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.680475 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.680490 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.680512 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.680532 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.783588 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.783640 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.783653 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.783667 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.783678 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.886388 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.886454 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.886469 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.886500 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.886516 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.989077 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.989122 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.989132 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.989148 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:18 crc kubenswrapper[5029]: I0313 20:29:18.989160 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:18Z","lastTransitionTime":"2026-03-13T20:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.091927 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.091997 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.092014 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.092033 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.092045 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:19Z","lastTransitionTime":"2026-03-13T20:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.193941 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.193984 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.193994 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.194007 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.194017 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:19Z","lastTransitionTime":"2026-03-13T20:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.296697 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.296767 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.296785 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.296808 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.296825 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:19Z","lastTransitionTime":"2026-03-13T20:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.399355 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.399413 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.399422 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.399440 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.399451 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:19Z","lastTransitionTime":"2026-03-13T20:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.502247 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.502306 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.502317 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.502339 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.502353 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:19Z","lastTransitionTime":"2026-03-13T20:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.599486 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.599587 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:19 crc kubenswrapper[5029]: E0313 20:29:19.599692 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.599718 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.599505 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:19 crc kubenswrapper[5029]: E0313 20:29:19.599801 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:19 crc kubenswrapper[5029]: E0313 20:29:19.599899 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:19 crc kubenswrapper[5029]: E0313 20:29:19.600106 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.605310 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.605446 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.605541 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.605641 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.605703 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:19Z","lastTransitionTime":"2026-03-13T20:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.709112 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.709168 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.709182 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.709203 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.709217 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:19Z","lastTransitionTime":"2026-03-13T20:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.811707 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.811790 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.811803 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.811818 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.811828 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:19Z","lastTransitionTime":"2026-03-13T20:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.914648 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.914704 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.914713 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.914728 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:19 crc kubenswrapper[5029]: I0313 20:29:19.914738 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:19Z","lastTransitionTime":"2026-03-13T20:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.021317 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.021392 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.021420 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.021456 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.021480 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:20Z","lastTransitionTime":"2026-03-13T20:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.125303 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.125367 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.125381 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.125404 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.125415 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:20Z","lastTransitionTime":"2026-03-13T20:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.228837 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.228896 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.228909 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.228926 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.228937 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:20Z","lastTransitionTime":"2026-03-13T20:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.332659 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.332712 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.332725 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.332749 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.332766 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:20Z","lastTransitionTime":"2026-03-13T20:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.435940 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.436000 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.436014 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.436034 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.436047 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:20Z","lastTransitionTime":"2026-03-13T20:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:20 crc kubenswrapper[5029]: E0313 20:29:20.536596 5029 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.617534 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.634620 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.649874 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.665398 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: E0313 20:29:20.680725 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.682384 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.694290 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://814331
44add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.709963 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.722966 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.733972 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.746525 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc 
kubenswrapper[5029]: I0313 20:29:20.759610 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.781018 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:13Z\\\",\\\"message\\\":\\\"network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0313 20:29:12.998146 7089 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0313 20:29:12.998234 7089 kube.go:317] 
Updating pod openshift-multus/network-metrics-daemon-frlln\\\\nF0313 20:29:12.998367 7089 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z]\\\\nI0313 20:29:12.997822 7089 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-ch\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.793347 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.803748 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd
3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:20 crc kubenswrapper[5029]: I0313 20:29:20.814052 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.598920 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.599092 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.599349 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.599373 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.599428 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.599430 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.599485 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.599622 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.645529 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.645562 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.645571 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.645584 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.645594 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:21Z","lastTransitionTime":"2026-03-13T20:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.658922 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:21Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.664269 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.664330 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.664344 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.664366 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.664378 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:21Z","lastTransitionTime":"2026-03-13T20:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.684427 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:21Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.689559 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.689616 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.689632 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.689655 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.689670 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:21Z","lastTransitionTime":"2026-03-13T20:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.704005 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:21Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.708345 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.708387 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.708400 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.708418 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.708432 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:21Z","lastTransitionTime":"2026-03-13T20:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.722347 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:21Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.726409 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.726488 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.726509 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.726540 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:21 crc kubenswrapper[5029]: I0313 20:29:21.726580 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:21Z","lastTransitionTime":"2026-03-13T20:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.745090 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:21Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:21 crc kubenswrapper[5029]: E0313 20:29:21.745336 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:29:22 crc kubenswrapper[5029]: I0313 20:29:22.611820 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 13 20:29:23 crc kubenswrapper[5029]: I0313 20:29:23.598616 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:23 crc kubenswrapper[5029]: I0313 20:29:23.598675 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:23 crc kubenswrapper[5029]: I0313 20:29:23.598616 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:23 crc kubenswrapper[5029]: E0313 20:29:23.598784 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:23 crc kubenswrapper[5029]: I0313 20:29:23.598637 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:23 crc kubenswrapper[5029]: E0313 20:29:23.598898 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:23 crc kubenswrapper[5029]: E0313 20:29:23.598994 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:23 crc kubenswrapper[5029]: E0313 20:29:23.599068 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:25 crc kubenswrapper[5029]: I0313 20:29:25.598710 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:25 crc kubenswrapper[5029]: I0313 20:29:25.598713 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:25 crc kubenswrapper[5029]: E0313 20:29:25.599357 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:25 crc kubenswrapper[5029]: I0313 20:29:25.598783 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:25 crc kubenswrapper[5029]: I0313 20:29:25.598755 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:25 crc kubenswrapper[5029]: E0313 20:29:25.599746 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:25 crc kubenswrapper[5029]: E0313 20:29:25.599963 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:25 crc kubenswrapper[5029]: E0313 20:29:25.599583 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:25 crc kubenswrapper[5029]: E0313 20:29:25.681938 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:29:26 crc kubenswrapper[5029]: I0313 20:29:26.599886 5029 scope.go:117] "RemoveContainer" containerID="05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0" Mar 13 20:29:26 crc kubenswrapper[5029]: I0313 20:29:26.928266 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:29:26 crc kubenswrapper[5029]: I0313 20:29:26.943892 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:26Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:26 crc kubenswrapper[5029]: I0313 20:29:26.956910 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:26Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:26 crc kubenswrapper[5029]: I0313 20:29:26.972783 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:26Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:26 crc kubenswrapper[5029]: I0313 20:29:26.985076 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:26Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.008745 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc 
kubenswrapper[5029]: I0313 20:29:27.033707 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:13Z\\\",\\\"message\\\":\\\"network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0313 20:29:12.998146 7089 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0313 20:29:12.998234 7089 kube.go:317] 
Updating pod openshift-multus/network-metrics-daemon-frlln\\\\nF0313 20:29:12.998367 7089 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z]\\\\nI0313 20:29:12.997822 7089 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-ch\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.057626 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.071304 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.084562 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd
3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.095339 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.107629 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.120987 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.137970 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.153222 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-rea
dyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.168472 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.187357 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.226976 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/1.log" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.230542 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579"} Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.231030 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.248880 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\"
,\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 
20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.262371 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.274714 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.289977 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.306891 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.318446 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.332310 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.344804 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.358826 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.370918 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.384597 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc 
kubenswrapper[5029]: I0313 20:29:27.406333 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:13Z\\\",\\\"message\\\":\\\"network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0313 20:29:12.998146 7089 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0313 20:29:12.998234 7089 kube.go:317] 
Updating pod openshift-multus/network-metrics-daemon-frlln\\\\nF0313 20:29:12.998367 7089 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z]\\\\nI0313 20:29:12.997822 7089 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-diagnostics/network-ch\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.427655 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.443946 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.457517 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd
3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.469774 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:27Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.598884 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:27 crc kubenswrapper[5029]: E0313 20:29:27.599027 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.599126 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.598908 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:27 crc kubenswrapper[5029]: E0313 20:29:27.599290 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:27 crc kubenswrapper[5029]: I0313 20:29:27.599351 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:27 crc kubenswrapper[5029]: E0313 20:29:27.599403 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:27 crc kubenswrapper[5029]: E0313 20:29:27.599459 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.236682 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/2.log" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.238392 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/1.log" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.241763 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579" exitCode=1 Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.241806 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579"} Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.241874 5029 scope.go:117] "RemoveContainer" containerID="05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.242608 5029 scope.go:117] "RemoveContainer" containerID="92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579" Mar 13 20:29:28 crc 
kubenswrapper[5029]: E0313 20:29:28.242982 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.264684 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7
253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.279583 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.292099 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd
3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.306047 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.323068 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.338387 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.356334 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.373159 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-rea
dyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.384757 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.395083 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.406590 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.416371 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.430203 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.439514 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.450736 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc 
kubenswrapper[5029]: I0313 20:29:28.468477 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b41888f05f5ddeb1bde38e7d60dec469f9d7855da766a7037267da003e8fe0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:13Z\\\",\\\"message\\\":\\\"network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0313 20:29:12.998146 7089 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0313 20:29:12.998234 7089 kube.go:317] 
Updating pod openshift-multus/network-metrics-daemon-frlln\\\\nF0313 20:29:12.998367 7089 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:12Z is after 2025-08-24T17:21:41Z]\\\\nI0313 20:29:12.997822 7089 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-ch\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:27Z\\\",\\\"message\\\":\\\"Opts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0313 20:29:27.443466 7274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:28Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:28 crc kubenswrapper[5029]: I0313 20:29:28.612662 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.247360 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/2.log" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.253398 5029 scope.go:117] "RemoveContainer" containerID="92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579" Mar 13 20:29:29 crc kubenswrapper[5029]: E0313 20:29:29.253649 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" 
podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.273616 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:27Z\\\",\\\"message\\\":\\\"Opts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0313 20:29:27.443466 7274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.288713 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.303550 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.317394 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd
3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.329479 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.351087 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92933
1dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.366978 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61
b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.377693 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.389471 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.401615 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.414172 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.424095 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.446654 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.462094 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.473443 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.483163 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.496142 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:29Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:29 crc 
kubenswrapper[5029]: I0313 20:29:29.598480 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.598535 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.598485 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:29 crc kubenswrapper[5029]: I0313 20:29:29.598696 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:29 crc kubenswrapper[5029]: E0313 20:29:29.598763 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:29 crc kubenswrapper[5029]: E0313 20:29:29.598662 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:29 crc kubenswrapper[5029]: E0313 20:29:29.598977 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:29 crc kubenswrapper[5029]: E0313 20:29:29.599057 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.614731 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.631094 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.646219 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd
3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.660513 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.675625 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: E0313 20:29:30.682500 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.698637 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.718192 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.745538 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\"
,\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 
20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.756982 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.769237 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.780326 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.791269 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.802014 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.812535 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.822304 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc kubenswrapper[5029]: I0313 20:29:30.832153 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:30 crc 
kubenswrapper[5029]: I0313 20:29:30.850329 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:27Z\\\",\\\"message\\\":\\\"Opts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0313 20:29:27.443466 7274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:30Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:31 crc kubenswrapper[5029]: I0313 20:29:31.598963 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:31 crc kubenswrapper[5029]: E0313 20:29:31.599246 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:31 crc kubenswrapper[5029]: I0313 20:29:31.599091 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:31 crc kubenswrapper[5029]: I0313 20:29:31.599311 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:31 crc kubenswrapper[5029]: E0313 20:29:31.599351 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:31 crc kubenswrapper[5029]: E0313 20:29:31.599577 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:31 crc kubenswrapper[5029]: I0313 20:29:31.599271 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:31 crc kubenswrapper[5029]: E0313 20:29:31.599800 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.004360 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.004609 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.004618 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.004631 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.004640 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:32Z","lastTransitionTime":"2026-03-13T20:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:32 crc kubenswrapper[5029]: E0313 20:29:32.018096 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:32Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.024145 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.024180 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.024192 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.024207 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.024218 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:32Z","lastTransitionTime":"2026-03-13T20:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:32 crc kubenswrapper[5029]: E0313 20:29:32.043238 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:32Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.047946 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.047992 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.048003 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.048022 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.048039 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:32Z","lastTransitionTime":"2026-03-13T20:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:32 crc kubenswrapper[5029]: E0313 20:29:32.061485 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:32Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.065415 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.065482 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.065493 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.065506 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.065515 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:32Z","lastTransitionTime":"2026-03-13T20:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:32 crc kubenswrapper[5029]: E0313 20:29:32.080056 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:32Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.083488 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.083515 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.083523 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.083538 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:32 crc kubenswrapper[5029]: I0313 20:29:32.083550 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:32Z","lastTransitionTime":"2026-03-13T20:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:32 crc kubenswrapper[5029]: E0313 20:29:32.097720 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:32Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:32 crc kubenswrapper[5029]: E0313 20:29:32.097834 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.506121 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:30:05.506086823 +0000 UTC m=+165.522169256 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.505954 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.507367 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.507540 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.507596 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.507627 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.507644 5029 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.507701 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-13 20:30:05.507688747 +0000 UTC m=+165.523771160 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.507706 5029 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.507787 5029 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.507638 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.507905 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:30:05.507825081 +0000 UTC m=+165.523907594 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.507943 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.508051 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:30:05.508023686 +0000 UTC m=+165.524106119 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.508065 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.508102 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.508120 5029 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.508177 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 20:30:05.50816027 +0000 UTC m=+165.524242683 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.599066 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.599181 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.599116 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.599365 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.599362 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.599503 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.599721 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.599798 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:33 crc kubenswrapper[5029]: I0313 20:29:33.608661 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.608872 5029 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:33 crc kubenswrapper[5029]: E0313 20:29:33.608949 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs podName:a301620b-657c-46c0-a1a4-f7774e38f273 nodeName:}" failed. No retries permitted until 2026-03-13 20:30:05.608927492 +0000 UTC m=+165.625009935 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs") pod "network-metrics-daemon-frlln" (UID: "a301620b-657c-46c0-a1a4-f7774e38f273") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:29:35 crc kubenswrapper[5029]: I0313 20:29:35.599234 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:35 crc kubenswrapper[5029]: E0313 20:29:35.599527 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:35 crc kubenswrapper[5029]: I0313 20:29:35.600293 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:35 crc kubenswrapper[5029]: E0313 20:29:35.600388 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:35 crc kubenswrapper[5029]: I0313 20:29:35.600465 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:35 crc kubenswrapper[5029]: E0313 20:29:35.600537 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:35 crc kubenswrapper[5029]: I0313 20:29:35.600596 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:35 crc kubenswrapper[5029]: E0313 20:29:35.600667 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:35 crc kubenswrapper[5029]: E0313 20:29:35.684301 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:29:37 crc kubenswrapper[5029]: I0313 20:29:37.598607 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:37 crc kubenswrapper[5029]: I0313 20:29:37.598693 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:37 crc kubenswrapper[5029]: I0313 20:29:37.598706 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:37 crc kubenswrapper[5029]: E0313 20:29:37.598781 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:37 crc kubenswrapper[5029]: I0313 20:29:37.598635 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:37 crc kubenswrapper[5029]: E0313 20:29:37.598926 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:37 crc kubenswrapper[5029]: E0313 20:29:37.599086 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:37 crc kubenswrapper[5029]: E0313 20:29:37.599204 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:39 crc kubenswrapper[5029]: I0313 20:29:39.599207 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:39 crc kubenswrapper[5029]: I0313 20:29:39.599244 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:39 crc kubenswrapper[5029]: I0313 20:29:39.599289 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:39 crc kubenswrapper[5029]: I0313 20:29:39.599224 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:39 crc kubenswrapper[5029]: E0313 20:29:39.599354 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:39 crc kubenswrapper[5029]: E0313 20:29:39.599564 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:39 crc kubenswrapper[5029]: E0313 20:29:39.599725 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:39 crc kubenswrapper[5029]: E0313 20:29:39.599833 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.621610 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.638819 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.659028 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.677053 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: E0313 20:29:40.685244 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.698695 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\"
,\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 
20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.717101 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.733905 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144
add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.751633 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc 
kubenswrapper[5029]: I0313 20:29:40.771610 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.785815 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.801389 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.814195 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.838469 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:27Z\\\",\\\"message\\\":\\\"Opts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0313 20:29:27.443466 7274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.856032 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.872796 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.887964 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd
3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:40 crc kubenswrapper[5029]: I0313 20:29:40.905261 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:40Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:41 crc kubenswrapper[5029]: I0313 20:29:41.599425 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:41 crc kubenswrapper[5029]: I0313 20:29:41.599543 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:41 crc kubenswrapper[5029]: I0313 20:29:41.599548 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:41 crc kubenswrapper[5029]: I0313 20:29:41.599659 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:41 crc kubenswrapper[5029]: E0313 20:29:41.599914 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:41 crc kubenswrapper[5029]: E0313 20:29:41.600032 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:41 crc kubenswrapper[5029]: E0313 20:29:41.600169 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:41 crc kubenswrapper[5029]: E0313 20:29:41.600314 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.363323 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.363968 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.363987 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.364012 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.364031 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:42Z","lastTransitionTime":"2026-03-13T20:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:42 crc kubenswrapper[5029]: E0313 20:29:42.378057 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:42Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.383165 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.383205 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.383217 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.383233 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.383243 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:42Z","lastTransitionTime":"2026-03-13T20:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:42 crc kubenswrapper[5029]: E0313 20:29:42.398495 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:42Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.402568 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.402614 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.402625 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.402643 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.402654 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:42Z","lastTransitionTime":"2026-03-13T20:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:42 crc kubenswrapper[5029]: E0313 20:29:42.418606 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:42Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.424132 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.424163 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.424173 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.424188 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.424199 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:42Z","lastTransitionTime":"2026-03-13T20:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:42 crc kubenswrapper[5029]: E0313 20:29:42.439447 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:42Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.447141 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.447189 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.447201 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.447220 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.447234 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:42Z","lastTransitionTime":"2026-03-13T20:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:42 crc kubenswrapper[5029]: E0313 20:29:42.463090 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:42Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:42 crc kubenswrapper[5029]: E0313 20:29:42.463234 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:29:42 crc kubenswrapper[5029]: I0313 20:29:42.599413 5029 scope.go:117] "RemoveContainer" containerID="92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579" Mar 13 20:29:42 crc kubenswrapper[5029]: E0313 20:29:42.599692 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" Mar 13 20:29:43 crc kubenswrapper[5029]: I0313 20:29:43.598890 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:43 crc kubenswrapper[5029]: E0313 20:29:43.599113 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:43 crc kubenswrapper[5029]: I0313 20:29:43.599214 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:43 crc kubenswrapper[5029]: I0313 20:29:43.599272 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:43 crc kubenswrapper[5029]: E0313 20:29:43.599492 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:43 crc kubenswrapper[5029]: I0313 20:29:43.599663 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:43 crc kubenswrapper[5029]: E0313 20:29:43.599914 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:43 crc kubenswrapper[5029]: E0313 20:29:43.600000 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:45 crc kubenswrapper[5029]: I0313 20:29:45.598981 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:45 crc kubenswrapper[5029]: E0313 20:29:45.599469 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:45 crc kubenswrapper[5029]: I0313 20:29:45.599029 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:45 crc kubenswrapper[5029]: I0313 20:29:45.599043 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:45 crc kubenswrapper[5029]: E0313 20:29:45.599669 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:45 crc kubenswrapper[5029]: E0313 20:29:45.599779 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:45 crc kubenswrapper[5029]: I0313 20:29:45.599050 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:45 crc kubenswrapper[5029]: E0313 20:29:45.599876 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:45 crc kubenswrapper[5029]: E0313 20:29:45.686754 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:29:47 crc kubenswrapper[5029]: I0313 20:29:47.598521 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:47 crc kubenswrapper[5029]: I0313 20:29:47.598564 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:47 crc kubenswrapper[5029]: I0313 20:29:47.598615 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:47 crc kubenswrapper[5029]: I0313 20:29:47.598653 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:47 crc kubenswrapper[5029]: E0313 20:29:47.598705 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:47 crc kubenswrapper[5029]: E0313 20:29:47.598794 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:47 crc kubenswrapper[5029]: E0313 20:29:47.598979 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:47 crc kubenswrapper[5029]: E0313 20:29:47.599093 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.323008 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/0.log" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.323125 5029 generic.go:334] "Generic (PLEG): container finished" podID="08946f02-ffb6-404b-b25c-6c261e8c2633" containerID="8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281" exitCode=1 Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.323180 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2thxr" event={"ID":"08946f02-ffb6-404b-b25c-6c261e8c2633","Type":"ContainerDied","Data":"8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281"} Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.323741 5029 scope.go:117] "RemoveContainer" containerID="8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.340356 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.355623 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.370302 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.383576 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.399454 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.414542 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-rea
dyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.428684 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.445089 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.464448 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.476620 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.492878 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc 
kubenswrapper[5029]: I0313 20:29:49.505941 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.526116 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:27Z\\\",\\\"message\\\":\\\"Opts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0313 20:29:27.443466 7274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.540578 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"2026-03-13T20:29:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d\\\\n2026-03-13T20:29:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d to /host/opt/cni/bin/\\\\n2026-03-13T20:29:04Z [verbose] multus-daemon started\\\\n2026-03-13T20:29:04Z [verbose] Readiness Indicator file check\\\\n2026-03-13T20:29:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.554122 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.568000 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.582103 5029 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:49Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.598378 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.598464 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.598552 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:49 crc kubenswrapper[5029]: E0313 20:29:49.598556 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:49 crc kubenswrapper[5029]: E0313 20:29:49.598689 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:49 crc kubenswrapper[5029]: E0313 20:29:49.598753 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:49 crc kubenswrapper[5029]: I0313 20:29:49.598992 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:49 crc kubenswrapper[5029]: E0313 20:29:49.599221 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.328108 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/0.log" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.328951 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2thxr" event={"ID":"08946f02-ffb6-404b-b25c-6c261e8c2633","Type":"ContainerStarted","Data":"8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c"} Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.351445 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.369554 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.384383 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-rea
dyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.396307 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.408994 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.421576 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.432219 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.446559 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.459322 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.474395 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.487448 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.499702 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc 
kubenswrapper[5029]: I0313 20:29:50.520935 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:27Z\\\",\\\"message\\\":\\\"Opts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0313 20:29:27.443466 7274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.534950 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.547545 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"2026-03-13T20:29:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d\\\\n2026-03-13T20:29:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d to /host/opt/cni/bin/\\\\n2026-03-13T20:29:04Z [verbose] multus-daemon started\\\\n2026-03-13T20:29:04Z [verbose] 
Readiness Indicator file check\\\\n2026-03-13T20:29:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.559530 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e
252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.571367 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.609906 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.613237 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.623167 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.632896 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc 
kubenswrapper[5029]: I0313 20:29:50.644841 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.655297 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.675382 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:27Z\\\",\\\"message\\\":\\\"Opts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0313 20:29:27.443466 7274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: E0313 20:29:50.687268 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.691520 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.703102 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.715324 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.728145 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"2026-03-13T20:29:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d\\\\n2026-03-13T20:29:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d to /host/opt/cni/bin/\\\\n2026-03-13T20:29:04Z [verbose] multus-daemon started\\\\n2026-03-13T20:29:04Z [verbose] 
Readiness Indicator file check\\\\n2026-03-13T20:29:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.739077 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.752237 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.764254 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.777610 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.793803 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.810034 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-rea
dyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:50 crc kubenswrapper[5029]: I0313 20:29:50.821958 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:50Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:51 crc kubenswrapper[5029]: I0313 20:29:51.599266 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:51 crc kubenswrapper[5029]: E0313 20:29:51.599757 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:51 crc kubenswrapper[5029]: I0313 20:29:51.599294 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:51 crc kubenswrapper[5029]: E0313 20:29:51.599901 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:51 crc kubenswrapper[5029]: I0313 20:29:51.599345 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:51 crc kubenswrapper[5029]: I0313 20:29:51.599276 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:51 crc kubenswrapper[5029]: E0313 20:29:51.600035 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:51 crc kubenswrapper[5029]: E0313 20:29:51.600233 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.645761 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.645818 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.645830 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.645845 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.645874 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:52Z","lastTransitionTime":"2026-03-13T20:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:52 crc kubenswrapper[5029]: E0313 20:29:52.658349 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:52Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.662873 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.662901 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.662909 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.662923 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.662934 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:52Z","lastTransitionTime":"2026-03-13T20:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:52 crc kubenswrapper[5029]: E0313 20:29:52.676290 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:52Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.679679 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.679707 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.679718 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.679732 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.679744 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:52Z","lastTransitionTime":"2026-03-13T20:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:52 crc kubenswrapper[5029]: E0313 20:29:52.690308 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:52Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.693161 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.693194 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.693203 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.693216 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.693226 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:52Z","lastTransitionTime":"2026-03-13T20:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:52 crc kubenswrapper[5029]: E0313 20:29:52.704658 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:52Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.707976 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.708008 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.708018 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.708033 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:29:52 crc kubenswrapper[5029]: I0313 20:29:52.708045 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:29:52Z","lastTransitionTime":"2026-03-13T20:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:29:52 crc kubenswrapper[5029]: E0313 20:29:52.719241 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:52Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:52 crc kubenswrapper[5029]: E0313 20:29:52.719690 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:29:53 crc kubenswrapper[5029]: I0313 20:29:53.598989 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:53 crc kubenswrapper[5029]: I0313 20:29:53.599038 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:53 crc kubenswrapper[5029]: I0313 20:29:53.599199 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:53 crc kubenswrapper[5029]: E0313 20:29:53.599463 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:53 crc kubenswrapper[5029]: I0313 20:29:53.599742 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:53 crc kubenswrapper[5029]: E0313 20:29:53.599898 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:53 crc kubenswrapper[5029]: E0313 20:29:53.600091 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:53 crc kubenswrapper[5029]: E0313 20:29:53.600194 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:55 crc kubenswrapper[5029]: I0313 20:29:55.598628 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:55 crc kubenswrapper[5029]: I0313 20:29:55.598681 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:55 crc kubenswrapper[5029]: I0313 20:29:55.598688 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:55 crc kubenswrapper[5029]: E0313 20:29:55.598820 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:55 crc kubenswrapper[5029]: I0313 20:29:55.598920 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:55 crc kubenswrapper[5029]: E0313 20:29:55.599011 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:55 crc kubenswrapper[5029]: E0313 20:29:55.598965 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:55 crc kubenswrapper[5029]: E0313 20:29:55.599129 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:55 crc kubenswrapper[5029]: I0313 20:29:55.600886 5029 scope.go:117] "RemoveContainer" containerID="92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579" Mar 13 20:29:55 crc kubenswrapper[5029]: E0313 20:29:55.688960 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.358527 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/2.log" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.363428 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364"} Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.364286 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.381822 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.406000 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ee
aee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"2026-03-13T20:29:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d\\\\n2026-03-13T20:29:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d to /host/opt/cni/bin/\\\\n2026-03-13T20:29:04Z [verbose] multus-daemon started\\\\n2026-03-13T20:29:04Z [verbose] Readiness Indicator file check\\\\n2026-03-13T20:29:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.419170 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.434512 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.450701 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.468789 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb0
85a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":
\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.485204 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\"
,\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 
20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.500724 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5f862a-c111-4dad-9e4e-9102ed7bd4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c4411d537505129b70428e21f20cf412ef5dd3003f7bd7b09a5b97fc5622809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.515160 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.528434 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.541884 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.552574 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.566192 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.578431 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.593164 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.605266 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc kubenswrapper[5029]: I0313 20:29:56.617915 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:56 crc 
kubenswrapper[5029]: I0313 20:29:56.637367 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:27Z\\\",\\\"message\\\":\\\"Opts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0313 20:29:27.443466 7274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:56Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.368644 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/3.log" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.369295 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/2.log" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.371870 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364" exitCode=1 Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.371910 5029 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364"} Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.371943 5029 scope.go:117] "RemoveContainer" containerID="92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.372591 5029 scope.go:117] "RemoveContainer" containerID="536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364" Mar 13 20:29:57 crc kubenswrapper[5029]: E0313 20:29:57.372736 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.388486 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.400773 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.414208 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.426383 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-rea
dyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.434489 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5f862a-c111-4dad-9e4e-9102ed7bd4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c4411d537505129b70428e21f20cf412ef5dd3003f7bd7b09a5b97fc5622809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.444260 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.454293 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.463409 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.474275 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.484687 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.496280 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.506237 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.517196 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc 
kubenswrapper[5029]: I0313 20:29:57.538512 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aba3978e311fa22e438527c7d57af075ac4c515546eb6f43fa8d631b44f579\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:27Z\\\",\\\"message\\\":\\\"Opts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0313 20:29:27.443466 7274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:56Z\\\",\\\"message\\\":\\\"r.go:360] Finished syncing service control-plane-machine-set-operator on namespace openshift-machine-api for network=default : 1.357136ms\\\\nI0313 20:29:56.377522 7606 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 20:29:56.377525 7606 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0313 20:29:56.377537 7606 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nI0313 20:29:56.377332 7606 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0313 20:29:56.377605 7606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91
cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.552344 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.566579 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"2026-03-13T20:29:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d\\\\n2026-03-13T20:29:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d to /host/opt/cni/bin/\\\\n2026-03-13T20:29:04Z [verbose] multus-daemon started\\\\n2026-03-13T20:29:04Z [verbose] 
Readiness Indicator file check\\\\n2026-03-13T20:29:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.577611 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e
252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.588733 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:57Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.598889 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:57 crc kubenswrapper[5029]: E0313 20:29:57.598993 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.598903 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.598890 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:57 crc kubenswrapper[5029]: E0313 20:29:57.599065 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:29:57 crc kubenswrapper[5029]: I0313 20:29:57.599159 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:57 crc kubenswrapper[5029]: E0313 20:29:57.599354 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:57 crc kubenswrapper[5029]: E0313 20:29:57.599422 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.376951 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/3.log" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.380688 5029 scope.go:117] "RemoveContainer" containerID="536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364" Mar 13 20:29:58 crc kubenswrapper[5029]: E0313 20:29:58.380998 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.395746 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.410537 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.426649 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.440727 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.456209 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc 
kubenswrapper[5029]: I0313 20:29:58.475545 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:56Z\\\",\\\"message\\\":\\\"r.go:360] Finished syncing service control-plane-machine-set-operator on namespace openshift-machine-api for network=default : 1.357136ms\\\\nI0313 20:29:56.377522 7606 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 20:29:56.377525 7606 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0313 20:29:56.377537 7606 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nI0313 20:29:56.377332 7606 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0313 20:29:56.377605 7606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.491881 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.509438 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"2026-03-13T20:29:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d\\\\n2026-03-13T20:29:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d to /host/opt/cni/bin/\\\\n2026-03-13T20:29:04Z [verbose] multus-daemon started\\\\n2026-03-13T20:29:04Z [verbose] 
Readiness Indicator file check\\\\n2026-03-13T20:29:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.525362 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e
252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.539093 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.556369 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\"
,\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 
20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.567796 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5f862a-c111-4dad-9e4e-9102ed7bd4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c4411d537505129b70428e21f20cf412ef5dd3003f7bd7b09a5b97fc5622809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.579162 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.590600 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.602248 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.612838 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.618065 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.633707 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:58 crc kubenswrapper[5029]: I0313 20:29:58.645597 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:29:58Z is after 2025-08-24T17:21:41Z" Mar 13 20:29:59 crc kubenswrapper[5029]: I0313 20:29:59.598536 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:29:59 crc kubenswrapper[5029]: I0313 20:29:59.598697 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:29:59 crc kubenswrapper[5029]: I0313 20:29:59.598888 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:29:59 crc kubenswrapper[5029]: I0313 20:29:59.598904 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:29:59 crc kubenswrapper[5029]: E0313 20:29:59.598902 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:29:59 crc kubenswrapper[5029]: E0313 20:29:59.599038 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:29:59 crc kubenswrapper[5029]: E0313 20:29:59.599075 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:29:59 crc kubenswrapper[5029]: E0313 20:29:59.599237 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.616098 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.630893 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.648572 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.664405 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.682563 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-rea
dyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: E0313 20:30:00.689567 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.696769 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5f862a-c111-4dad-9e4e-9102ed7bd4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c4411d537505129b70428e21f20cf412ef5dd3003f7bd7b09a5b97fc5622809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.719287 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b8d712a-f16b-46de-a254-f67acc5db843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f31cb1a90ed9fa5f8ad95d32a2324a69d255b9cd27c9be511de9e0212d19c6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://041488a41735862f6541de001124dbf962d63581b6ada9ab9f22e5f8ed726cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f67cf81d042bc7f29637cd8043d414a9c2b413f36602cad57caaa402663e102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7089652396a5ebeda4285f2c39061bdc42021b133f87381666ea6cfe8536713f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5573add004a32067fde0ecfc2a9f880cf5b91d05d1304622eb26a7d36dbda4d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f94fc6f41ee63eae61c2f511d14cdf3806e5def2e55502466375b9a657e8b7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f94fc6f41ee63eae61c2f511d14cdf3806e5def2e55502466375b9a657e8b7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2ce1a7663e55379a9ae7620967204eaec693070b7744373b2f54a8488e96cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d2ce1a7663e55379a9ae7620967204eaec693070b7744373b2f54a8488e96cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://04053f7bf280a918b270eb2b1be6988ff69f88c293c3b601d1740509c1f552c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04053f7bf280a918b270eb2b1be6988ff69f88c293c3b601d1740509c1f552c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.734996 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.748076 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.759474 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc 
kubenswrapper[5029]: I0313 20:30:00.774465 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.787997 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.800641 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.813125 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.833690 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:56Z\\\",\\\"message\\\":\\\"r.go:360] Finished syncing service control-plane-machine-set-operator on namespace openshift-machine-api for network=default : 1.357136ms\\\\nI0313 20:29:56.377522 7606 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 20:29:56.377525 7606 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0313 20:29:56.377537 7606 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nI0313 20:29:56.377332 7606 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0313 20:29:56.377605 7606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.847548 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.862880 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"2026-03-13T20:29:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d\\\\n2026-03-13T20:29:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d to /host/opt/cni/bin/\\\\n2026-03-13T20:29:04Z [verbose] multus-daemon started\\\\n2026-03-13T20:29:04Z [verbose] 
Readiness Indicator file check\\\\n2026-03-13T20:29:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.877708 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e
252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:00 crc kubenswrapper[5029]: I0313 20:30:00.892194 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:00Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:01 crc kubenswrapper[5029]: I0313 20:30:01.598751 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:01 crc kubenswrapper[5029]: I0313 20:30:01.598785 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:01 crc kubenswrapper[5029]: E0313 20:30:01.598975 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:01 crc kubenswrapper[5029]: I0313 20:30:01.598779 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:01 crc kubenswrapper[5029]: E0313 20:30:01.599117 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:01 crc kubenswrapper[5029]: E0313 20:30:01.599201 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:01 crc kubenswrapper[5029]: I0313 20:30:01.599288 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:01 crc kubenswrapper[5029]: E0313 20:30:01.599357 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.916945 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.917882 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.918006 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.918113 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.918176 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:02Z","lastTransitionTime":"2026-03-13T20:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:02 crc kubenswrapper[5029]: E0313 20:30:02.931052 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:02Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.935970 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.936236 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.936317 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.936388 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.936455 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:02Z","lastTransitionTime":"2026-03-13T20:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:02 crc kubenswrapper[5029]: E0313 20:30:02.948538 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:02Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.953506 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.953550 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.953571 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.953589 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.953598 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:02Z","lastTransitionTime":"2026-03-13T20:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:02 crc kubenswrapper[5029]: E0313 20:30:02.966397 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:02Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.970423 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.970453 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.970462 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.970476 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.970486 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:02Z","lastTransitionTime":"2026-03-13T20:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.987506 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.987643 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.987726 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.987919 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:02 crc kubenswrapper[5029]: I0313 20:30:02.988007 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:02Z","lastTransitionTime":"2026-03-13T20:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:02 crc kubenswrapper[5029]: E0313 20:30:02.999338 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:02Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:02 crc kubenswrapper[5029]: E0313 20:30:02.999450 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:30:03 crc kubenswrapper[5029]: I0313 20:30:03.598399 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:03 crc kubenswrapper[5029]: I0313 20:30:03.598480 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:03 crc kubenswrapper[5029]: E0313 20:30:03.598557 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:03 crc kubenswrapper[5029]: I0313 20:30:03.598514 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:03 crc kubenswrapper[5029]: I0313 20:30:03.598514 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:03 crc kubenswrapper[5029]: E0313 20:30:03.598692 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:03 crc kubenswrapper[5029]: E0313 20:30:03.598827 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:03 crc kubenswrapper[5029]: E0313 20:30:03.598935 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.569722 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.569946 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.569907724 +0000 UTC m=+229.585990147 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.570652 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.570756 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.570859 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.570916 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.570955 5029 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.570934 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.571007 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.570992733 +0000 UTC m=+229.587075136 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.571151 5029 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.570958 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.571314 5029 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.571126 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.571363 5029 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.571370 5029 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.571299 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.571289401 +0000 UTC m=+229.587371804 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.571394 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.571386893 +0000 UTC m=+229.587469296 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.571430 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:09.571399943 +0000 UTC m=+229.587482346 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.599378 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.599433 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.599463 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.599568 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.599589 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.599691 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.599911 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.599955 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:05 crc kubenswrapper[5029]: I0313 20:30:05.672013 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.672154 5029 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.672215 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs podName:a301620b-657c-46c0-a1a4-f7774e38f273 nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.672196946 +0000 UTC m=+229.688279339 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs") pod "network-metrics-daemon-frlln" (UID: "a301620b-657c-46c0-a1a4-f7774e38f273") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 20:30:05 crc kubenswrapper[5029]: E0313 20:30:05.690456 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:30:07 crc kubenswrapper[5029]: I0313 20:30:07.598709 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:07 crc kubenswrapper[5029]: E0313 20:30:07.599574 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:07 crc kubenswrapper[5029]: I0313 20:30:07.598911 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:07 crc kubenswrapper[5029]: E0313 20:30:07.599661 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:07 crc kubenswrapper[5029]: I0313 20:30:07.598938 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:07 crc kubenswrapper[5029]: E0313 20:30:07.599733 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:07 crc kubenswrapper[5029]: I0313 20:30:07.598723 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:07 crc kubenswrapper[5029]: E0313 20:30:07.599804 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:09 crc kubenswrapper[5029]: I0313 20:30:09.598637 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:09 crc kubenswrapper[5029]: I0313 20:30:09.598689 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:09 crc kubenswrapper[5029]: E0313 20:30:09.598783 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:09 crc kubenswrapper[5029]: I0313 20:30:09.598791 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:09 crc kubenswrapper[5029]: I0313 20:30:09.598734 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:09 crc kubenswrapper[5029]: E0313 20:30:09.598929 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:09 crc kubenswrapper[5029]: E0313 20:30:09.599019 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:09 crc kubenswrapper[5029]: E0313 20:30:09.599159 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.625233 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:56Z\\\",\\\"message\\\":\\\"r.go:360] Finished syncing service control-plane-machine-set-operator on namespace openshift-machine-api for network=default : 1.357136ms\\\\nI0313 20:29:56.377522 7606 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 20:29:56.377525 7606 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0313 20:29:56.377537 7606 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nI0313 20:29:56.377332 7606 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0313 20:29:56.377605 7606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.643319 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.667426 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.684765 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"2026-03-13T20:29:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d\\\\n2026-03-13T20:29:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d to /host/opt/cni/bin/\\\\n2026-03-13T20:29:04Z [verbose] multus-daemon started\\\\n2026-03-13T20:29:04Z [verbose] 
Readiness Indicator file check\\\\n2026-03-13T20:29:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: E0313 20:30:10.690946 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.700115 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.716037 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.731205 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.743562 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.757997 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.775062 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.796064 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-rea
dyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.807943 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5f862a-c111-4dad-9e4e-9102ed7bd4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c4411d537505129b70428e21f20cf412ef5dd3003f7bd7b09a5b97fc5622809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.835201 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b8d712a-f16b-46de-a254-f67acc5db843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f31cb1a90ed9fa5f8ad95d32a2324a69d255b9cd27c9be511de9e0212d19c6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://041488a41735862f6541de001124dbf962d63581b6ada9ab9f22e5f8ed726cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f67cf81d042bc7f29637cd8043d414a9c2b413f36602cad57caaa402663e102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7089652396a5ebeda4285f2c39061bdc42021b133f87381666ea6cfe8536713f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5573add004a32067fde0ecfc2a9f880cf5b91d05d1304622eb26a7d36dbda4d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f94fc6f41ee63eae61c2f511d14cdf3806e5def2e55502466375b9a657e8b7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f94fc6f41ee63eae61c2f511d14cdf3806e5def2e55502466375b9a657e8b7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2ce1a7663e55379a9ae7620967204eaec693070b7744373b2f54a8488e96cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d2ce1a7663e55379a9ae7620967204eaec693070b7744373b2f54a8488e96cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://04053f7bf280a918b270eb2b1be6988ff69f88c293c3b601d1740509c1f552c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04053f7bf280a918b270eb2b1be6988ff69f88c293c3b601d1740509c1f552c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.865213 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.896341 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.908434 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc 
kubenswrapper[5029]: I0313 20:30:10.921009 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.932224 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:10 crc kubenswrapper[5029]: I0313 20:30:10.944602 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:10Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:11 crc kubenswrapper[5029]: I0313 20:30:11.599351 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:11 crc kubenswrapper[5029]: E0313 20:30:11.599774 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:11 crc kubenswrapper[5029]: I0313 20:30:11.599400 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:11 crc kubenswrapper[5029]: I0313 20:30:11.599504 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:11 crc kubenswrapper[5029]: I0313 20:30:11.599463 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:11 crc kubenswrapper[5029]: E0313 20:30:11.600009 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:11 crc kubenswrapper[5029]: E0313 20:30:11.600080 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:11 crc kubenswrapper[5029]: E0313 20:30:11.600128 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:12 crc kubenswrapper[5029]: I0313 20:30:12.599431 5029 scope.go:117] "RemoveContainer" containerID="536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364" Mar 13 20:30:12 crc kubenswrapper[5029]: E0313 20:30:12.599586 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.268798 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.268838 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.268847 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.268880 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.268891 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:13Z","lastTransitionTime":"2026-03-13T20:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.281524 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.286641 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.286675 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.286685 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.286699 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.286709 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:13Z","lastTransitionTime":"2026-03-13T20:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.298975 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.303339 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.303409 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.303427 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.303454 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.303470 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:13Z","lastTransitionTime":"2026-03-13T20:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.316468 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.320962 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.321007 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.321019 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.321039 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.321052 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:13Z","lastTransitionTime":"2026-03-13T20:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.335795 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.339179 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.339219 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.339233 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.339249 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.339260 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:13Z","lastTransitionTime":"2026-03-13T20:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.356987 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:13Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.357169 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.598764 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.598865 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.598846 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:13 crc kubenswrapper[5029]: I0313 20:30:13.598775 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.598994 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.599089 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.599226 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:13 crc kubenswrapper[5029]: E0313 20:30:13.599330 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:15 crc kubenswrapper[5029]: I0313 20:30:15.599367 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:15 crc kubenswrapper[5029]: I0313 20:30:15.599505 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:15 crc kubenswrapper[5029]: E0313 20:30:15.599584 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:15 crc kubenswrapper[5029]: I0313 20:30:15.599379 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:15 crc kubenswrapper[5029]: E0313 20:30:15.599695 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:15 crc kubenswrapper[5029]: I0313 20:30:15.599381 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:15 crc kubenswrapper[5029]: E0313 20:30:15.600068 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:15 crc kubenswrapper[5029]: E0313 20:30:15.600205 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:15 crc kubenswrapper[5029]: E0313 20:30:15.692889 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:30:17 crc kubenswrapper[5029]: I0313 20:30:17.599127 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:17 crc kubenswrapper[5029]: I0313 20:30:17.599270 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:17 crc kubenswrapper[5029]: E0313 20:30:17.599385 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:17 crc kubenswrapper[5029]: I0313 20:30:17.599449 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:17 crc kubenswrapper[5029]: E0313 20:30:17.599683 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:17 crc kubenswrapper[5029]: E0313 20:30:17.599901 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:17 crc kubenswrapper[5029]: I0313 20:30:17.599960 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:17 crc kubenswrapper[5029]: E0313 20:30:17.600112 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:19 crc kubenswrapper[5029]: I0313 20:30:19.598568 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:19 crc kubenswrapper[5029]: I0313 20:30:19.598624 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:19 crc kubenswrapper[5029]: I0313 20:30:19.598636 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:19 crc kubenswrapper[5029]: I0313 20:30:19.598652 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:19 crc kubenswrapper[5029]: E0313 20:30:19.598740 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:19 crc kubenswrapper[5029]: E0313 20:30:19.598874 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:19 crc kubenswrapper[5029]: E0313 20:30:19.599350 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:19 crc kubenswrapper[5029]: E0313 20:30:19.599568 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.613507 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce75265330e3a3efb92b84da5c93a8f820982dcc501f786b3d4b722d36031912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43db3bb8ab53a04ea973a10f6c482c792d1330dd267f1a8f10b54b2f2b86254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.625530 5029 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e75121332bcbaaed8c91eac278ad3ba2dafbf06705125d62026ee2250b763f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.638358 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.648392 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jflsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae27301f-09d6-4818-8896-d53499075139\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad207e1606479f3346c17805c295498502f2f92a25f8d510f5da80d88db88f16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jflsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.659271 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-frlln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a301620b-657c-46c0-a1a4-f7774e38f273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2gf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-frlln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc 
kubenswrapper[5029]: I0313 20:30:20.678899 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:56Z\\\",\\\"message\\\":\\\"r.go:360] Finished syncing service control-plane-machine-set-operator on namespace openshift-machine-api for network=default : 1.357136ms\\\\nI0313 20:29:56.377522 7606 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0313 20:29:56.377525 7606 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0313 20:29:56.377537 7606 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nI0313 20:29:56.377332 7606 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0313 20:29:56.377605 7606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba3a9055212a96d6e
465f5be88972fb130b12dc06005e0bad69447582b91cc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5nhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v2xrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.692726 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0889626a-1137-4012-81b3-ff8693b88b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357ced2c0a34974fa1a085405160a96507a772e2c47e15ab277ac8c0bcdf69c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c7253be2fac4731b426604d7be7fae349e102160aaffc43f9c5f0c717e55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:27:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0313 20:27:22.598373 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0313 20:27:22.600171 1 observer_polling.go:159] Starting file observer\\\\nI0313 20:27:22.627347 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0313 20:27:22.632570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0313 20:27:53.009427 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:27:52Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308a4241ea4e715d4e67e67f242504c5959f4fa330868bcca8c8722f22a01680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8e887dfde8ab5e1c8f0f58a6781255e3bb0781c793b58260db3c7c0828fe998\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: E0313 20:30:20.693596 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.708427 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2thxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08946f02-ffb6-404b-b25c-6c261e8c2633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-13T20:29:49Z\\\",\\\"message\\\":\\\"2026-03-13T20:29:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d\\\\n2026-03-13T20:29:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5bb33df3-7d1e-4625-b6ed-882cca5f111d to /host/opt/cni/bin/\\\\n2026-03-13T20:29:04Z [verbose] multus-daemon started\\\\n2026-03-13T20:29:04Z [verbose] Readiness Indicator file check\\\\n2026-03-13T20:29:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w2gz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2thxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.720070 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce41f1f-fd4c-42c0-b6ff-67410230a662\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1cca4daf5845fc53767e6bf8db41d5494d7c9f31f1035019212d283414c24a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbf90ccd3ee3559b532e5425b9b4d897c45e
252dfdd54762089b2a1df012158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-546vn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z2p2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.732365 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa028723-a519-4f82-860c-4c149f3a4e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1514f8ab6eac4475e5171d715bc4703a5888a6198cc1757e4b682a65af6f7f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581c
df4cb7944cdce8f9e342a2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9sk7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28st2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.745198 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.759759 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836826d8431b438a8ba4ab628a2553e9236fc774151842c2eae350e8ddb0a44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.779079 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa07f40-f2db-461a-871b-85f3693e9069\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a7baefc46806a533d0f15930b3966df57a776945b7339bf942f6635244390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a6de31ec52858906e492f6a437bfe5f48a7c0e76a7f18435403219f73e9af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0a7f5ecc68e6a240b408ed4a74017e08468eba2f51187853329b557259a1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98b1a077abd9b663e53abfa78ac0a560401340e18bc9b8f7e02528b7ae68f60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://929331dcba734fff032843c0c27ca7cb5ad032b41f69c9809e5b409be7ab1304\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d6f4e4cefee77974837c3d6b221186f8bc00b4c0e021f9998f5846997a66f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631a9b32f8120028b1dcd5caf9000f4e31c8d77f2957d3b3579fffe8d121644e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:29:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfmv8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zrq2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.793338 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"156f2844-a3fc-4b2b-affe-2340ca467835\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-13T20:28:22Z\\\",\\\"message\\\":\\\"ES_128_CBC_SHA256' detected.\\\\nW0313 20:28:22.769121 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0313 20:28:22.769125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0313 20:28:22.769130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0313 20:28:22.769133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0313 20:28:22.769176 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0313 20:28:22.774478 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774524 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0313 20:28:22.774564 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774589 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0313 20:28:22.774604 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0313 20:28:22.774606 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0313 20:28:22.774719 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0313 20:28:22.774744 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0313 20:28:22.775007 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0313 20:28:22.775136 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558909539/tls.crt::/tmp/serving-cert-3558909539/tls.key\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-13T20:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-rea
dyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.808684 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5f862a-c111-4dad-9e4e-9102ed7bd4e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c4411d537505129b70428e21f20cf412ef5dd3003f7bd7b09a5b97fc5622809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37843f2c57b4fc1c82238ff720b38e7812873cf8295c5c996bf44364316080cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.828617 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b8d712a-f16b-46de-a254-f67acc5db843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f31cb1a90ed9fa5f8ad95d32a2324a69d255b9cd27c9be511de9e0212d19c6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://041488a41735862f6541de001124dbf962d63581b6ada9ab9f22e5f8ed726cea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f67cf81d042bc7f29637cd8043d414a9c2b413f36602cad57caaa402663e102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7089652396a5ebeda4285f2c39061bdc42021b133f87381666ea6cfe8536713f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5573add004a32067fde0ecfc2a9f880cf5b91d05d1304622eb26a7d36dbda4d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f94fc6f41ee63eae61c2f511d14cdf3806e5def2e55502466375b9a657e8b7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f94fc6f41ee63eae61c2f511d14cdf3806e5def2e55502466375b9a657e8b7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2ce1a7663e55379a9ae7620967204eaec693070b7744373b2f54a8488e96cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d2ce1a7663e55379a9ae7620967204eaec693070b7744373b2f54a8488e96cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://04053f7bf280a918b270eb2b1be6988ff69f88c293c3b601d1740509c1f552c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04053f7bf280a918b270eb2b1be6988ff69f88c293c3b601d1740509c1f552c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.846199 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52dc4255-57ff-4f0b-bf23-b93aa84dc9ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ced37b14e97f06294f7d2f3e96293845c2304e33e57b11699ab18e7acaee70e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e68877c13b3e151d9adfcac4c72fb670dccc76100b16d3a3c3190daf4a02bb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a08040facb09b8c5c0c31a876bf0bd95e7d2751a2aefd63c97656dd208a5fa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9b3f32623c542ffd8103a4403f4a019fafae96861a606a5c4566cb118a7fca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-13T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-13T20:27:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:27:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.863440 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:20 crc kubenswrapper[5029]: I0313 20:30:20.873492 5029 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5xkjw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0fc000-74cb-4d5d-91b7-73d004abc007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T20:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81433144add2b97717e4edb14fd31e69fa19ea7def69b0a672b84c99292f7dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-13T20:29:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4tbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-13T20:29:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5xkjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:20Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:21 crc kubenswrapper[5029]: I0313 20:30:21.598789 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:21 crc kubenswrapper[5029]: I0313 20:30:21.598931 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:21 crc kubenswrapper[5029]: I0313 20:30:21.599195 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:21 crc kubenswrapper[5029]: I0313 20:30:21.599206 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:21 crc kubenswrapper[5029]: E0313 20:30:21.599332 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:21 crc kubenswrapper[5029]: E0313 20:30:21.599552 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:21 crc kubenswrapper[5029]: E0313 20:30:21.599637 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:21 crc kubenswrapper[5029]: E0313 20:30:21.599705 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.599205 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.599313 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.599355 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.599430 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:23 crc kubenswrapper[5029]: E0313 20:30:23.599423 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:23 crc kubenswrapper[5029]: E0313 20:30:23.599650 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:23 crc kubenswrapper[5029]: E0313 20:30:23.599705 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:23 crc kubenswrapper[5029]: E0313 20:30:23.599817 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.644684 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.644726 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.644734 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.644750 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.644761 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:23Z","lastTransitionTime":"2026-03-13T20:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:23 crc kubenswrapper[5029]: E0313 20:30:23.660407 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:23Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.665901 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.665940 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.665949 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.665965 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.665975 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:23Z","lastTransitionTime":"2026-03-13T20:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:23 crc kubenswrapper[5029]: E0313 20:30:23.679981 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:23Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.685243 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.685349 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.685374 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.685409 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.685445 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:23Z","lastTransitionTime":"2026-03-13T20:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:23 crc kubenswrapper[5029]: E0313 20:30:23.701632 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:23Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.713339 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.713396 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.713412 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.713438 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:23 crc kubenswrapper[5029]: I0313 20:30:23.713458 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:23Z","lastTransitionTime":"2026-03-13T20:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 13 20:30:23 crc kubenswrapper[5029]: E0313 20:30:23.751945 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-13T20:30:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"044ba3ae-5433-4825-be63-55a4dd605347\\\",\\\"systemUUID\\\":\\\"b7fabe7c-62aa-4375-98f0-4d41d9f3bdaf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-13T20:30:23Z is after 2025-08-24T17:21:41Z" Mar 13 20:30:23 crc kubenswrapper[5029]: E0313 20:30:23.752218 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 20:30:25 crc kubenswrapper[5029]: I0313 20:30:25.598545 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:25 crc kubenswrapper[5029]: I0313 20:30:25.598590 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:25 crc kubenswrapper[5029]: E0313 20:30:25.598694 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:25 crc kubenswrapper[5029]: I0313 20:30:25.598568 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:25 crc kubenswrapper[5029]: I0313 20:30:25.598773 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:25 crc kubenswrapper[5029]: E0313 20:30:25.598951 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:25 crc kubenswrapper[5029]: E0313 20:30:25.599070 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:25 crc kubenswrapper[5029]: E0313 20:30:25.599115 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:25 crc kubenswrapper[5029]: E0313 20:30:25.694813 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:30:27 crc kubenswrapper[5029]: I0313 20:30:27.598808 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:27 crc kubenswrapper[5029]: I0313 20:30:27.599107 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:27 crc kubenswrapper[5029]: E0313 20:30:27.599101 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:27 crc kubenswrapper[5029]: I0313 20:30:27.599194 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:27 crc kubenswrapper[5029]: I0313 20:30:27.599525 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:27 crc kubenswrapper[5029]: E0313 20:30:27.599575 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:27 crc kubenswrapper[5029]: E0313 20:30:27.599944 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:27 crc kubenswrapper[5029]: I0313 20:30:27.599957 5029 scope.go:117] "RemoveContainer" containerID="536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364" Mar 13 20:30:27 crc kubenswrapper[5029]: E0313 20:30:27.600102 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v2xrv_openshift-ovn-kubernetes(ed9df53f-1a1d-4cbc-997a-79dbe299d2b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" Mar 13 20:30:27 crc kubenswrapper[5029]: E0313 20:30:27.600215 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:29 crc kubenswrapper[5029]: I0313 20:30:29.599301 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:29 crc kubenswrapper[5029]: I0313 20:30:29.599311 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:29 crc kubenswrapper[5029]: I0313 20:30:29.599429 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:29 crc kubenswrapper[5029]: E0313 20:30:29.599744 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:29 crc kubenswrapper[5029]: E0313 20:30:29.599830 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:29 crc kubenswrapper[5029]: I0313 20:30:29.599329 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:29 crc kubenswrapper[5029]: E0313 20:30:29.600010 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:29 crc kubenswrapper[5029]: E0313 20:30:29.600176 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.666091 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.66605874 podStartE2EDuration="1m8.66605874s" podCreationTimestamp="2026-03-13 20:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.665974508 +0000 UTC m=+190.682056901" watchObservedRunningTime="2026-03-13 20:30:30.66605874 +0000 UTC m=+190.682141143" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.683194 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2thxr" podStartSLOduration=141.683176854 podStartE2EDuration="2m21.683176854s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.682947278 +0000 UTC m=+190.699029711" watchObservedRunningTime="2026-03-13 20:30:30.683176854 +0000 UTC m=+190.699259257" Mar 13 20:30:30 crc kubenswrapper[5029]: E0313 20:30:30.695508 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.701415 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z2p2c" podStartSLOduration=140.701381348 podStartE2EDuration="2m20.701381348s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.700288578 +0000 UTC m=+190.716370991" watchObservedRunningTime="2026-03-13 20:30:30.701381348 +0000 UTC m=+190.717463761" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.714263 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podStartSLOduration=141.714238546 podStartE2EDuration="2m21.714238546s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.714143644 +0000 UTC m=+190.730226047" watchObservedRunningTime="2026-03-13 20:30:30.714238546 +0000 UTC m=+190.730320989" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.809060 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zrq2k" podStartSLOduration=141.809035706 podStartE2EDuration="2m21.809035706s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.789337272 +0000 UTC m=+190.805419685" watchObservedRunningTime="2026-03-13 20:30:30.809035706 +0000 UTC m=+190.825118109" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.823172 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.823135328 podStartE2EDuration="1m19.823135328s" podCreationTimestamp="2026-03-13 20:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.809656123 +0000 UTC m=+190.825738526" watchObservedRunningTime="2026-03-13 20:30:30.823135328 +0000 UTC m=+190.839217751" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.823387 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=40.823379355 podStartE2EDuration="40.823379355s" podCreationTimestamp="2026-03-13 20:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.822685096 +0000 UTC m=+190.838767509" watchObservedRunningTime="2026-03-13 20:30:30.823379355 +0000 UTC m=+190.839461788" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.852726 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=32.852400902 podStartE2EDuration="32.852400902s" podCreationTimestamp="2026-03-13 20:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.851937889 +0000 UTC m=+190.868020312" watchObservedRunningTime="2026-03-13 20:30:30.852400902 +0000 UTC m=+190.868483305" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.874514 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=62.874482931 podStartE2EDuration="1m2.874482931s" podCreationTimestamp="2026-03-13 20:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-13 20:30:30.872949099 +0000 UTC m=+190.889031542" watchObservedRunningTime="2026-03-13 20:30:30.874482931 +0000 UTC m=+190.890565354" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.889143 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5xkjw" podStartSLOduration=141.889111658 podStartE2EDuration="2m21.889111658s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.88883953 +0000 UTC m=+190.904921983" watchObservedRunningTime="2026-03-13 20:30:30.889111658 +0000 UTC m=+190.905194061" Mar 13 20:30:30 crc kubenswrapper[5029]: I0313 20:30:30.956151 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jflsf" podStartSLOduration=141.956126134 podStartE2EDuration="2m21.956126134s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:30.955462746 +0000 UTC m=+190.971545149" watchObservedRunningTime="2026-03-13 20:30:30.956126134 +0000 UTC m=+190.972208537" Mar 13 20:30:31 crc kubenswrapper[5029]: I0313 20:30:31.599400 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:31 crc kubenswrapper[5029]: E0313 20:30:31.599983 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:31 crc kubenswrapper[5029]: I0313 20:30:31.599826 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:31 crc kubenswrapper[5029]: I0313 20:30:31.600154 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:31 crc kubenswrapper[5029]: I0313 20:30:31.599777 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:31 crc kubenswrapper[5029]: E0313 20:30:31.600327 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:31 crc kubenswrapper[5029]: E0313 20:30:31.600389 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:31 crc kubenswrapper[5029]: E0313 20:30:31.600526 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.598720 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:33 crc kubenswrapper[5029]: E0313 20:30:33.598847 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.599083 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.599233 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.599162 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:33 crc kubenswrapper[5029]: E0313 20:30:33.599591 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:33 crc kubenswrapper[5029]: E0313 20:30:33.599713 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:33 crc kubenswrapper[5029]: E0313 20:30:33.599513 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.867156 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.867490 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.867590 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.867683 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.867787 5029 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T20:30:33Z","lastTransitionTime":"2026-03-13T20:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.911144 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf"] Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.914622 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.917958 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.918234 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.919467 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.920385 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.990144 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a18c75d-a468-49c9-ada5-bbc3df9ef619-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.990190 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a18c75d-a468-49c9-ada5-bbc3df9ef619-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.990261 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4a18c75d-a468-49c9-ada5-bbc3df9ef619-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.990437 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a18c75d-a468-49c9-ada5-bbc3df9ef619-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:33 crc kubenswrapper[5029]: I0313 20:30:33.990499 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a18c75d-a468-49c9-ada5-bbc3df9ef619-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.091515 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a18c75d-a468-49c9-ada5-bbc3df9ef619-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.092245 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a18c75d-a468-49c9-ada5-bbc3df9ef619-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.092276 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a18c75d-a468-49c9-ada5-bbc3df9ef619-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.092297 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a18c75d-a468-49c9-ada5-bbc3df9ef619-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.092322 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a18c75d-a468-49c9-ada5-bbc3df9ef619-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.092407 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a18c75d-a468-49c9-ada5-bbc3df9ef619-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.092449 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/4a18c75d-a468-49c9-ada5-bbc3df9ef619-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.093148 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a18c75d-a468-49c9-ada5-bbc3df9ef619-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.100547 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a18c75d-a468-49c9-ada5-bbc3df9ef619-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.116197 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a18c75d-a468-49c9-ada5-bbc3df9ef619-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n5vhf\" (UID: \"4a18c75d-a468-49c9-ada5-bbc3df9ef619\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.233674 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.528041 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" event={"ID":"4a18c75d-a468-49c9-ada5-bbc3df9ef619","Type":"ContainerStarted","Data":"068d8e25d66713522acc5650b8bb1e242467dcc911b79454901fbe5601fdd3da"} Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.528675 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" event={"ID":"4a18c75d-a468-49c9-ada5-bbc3df9ef619","Type":"ContainerStarted","Data":"469b6a93c3090eda2f017d35c6013caf27101a8305fe76c4d7671ec88d3cc50a"} Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.549893 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n5vhf" podStartSLOduration=145.549832599 podStartE2EDuration="2m25.549832599s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:34.548627307 +0000 UTC m=+194.564709730" watchObservedRunningTime="2026-03-13 20:30:34.549832599 +0000 UTC m=+194.565915022" Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.645742 5029 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 13 20:30:34 crc kubenswrapper[5029]: I0313 20:30:34.657653 5029 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 20:30:35.532004 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/1.log" Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 
20:30:35.532679 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/0.log" Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 20:30:35.532754 5029 generic.go:334] "Generic (PLEG): container finished" podID="08946f02-ffb6-404b-b25c-6c261e8c2633" containerID="8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c" exitCode=1 Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 20:30:35.532799 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2thxr" event={"ID":"08946f02-ffb6-404b-b25c-6c261e8c2633","Type":"ContainerDied","Data":"8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c"} Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 20:30:35.532841 5029 scope.go:117] "RemoveContainer" containerID="8ff817d4424924af297da83bfdba192255dcb91d4ad9c638971459b4d8fd2281" Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 20:30:35.533379 5029 scope.go:117] "RemoveContainer" containerID="8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c" Mar 13 20:30:35 crc kubenswrapper[5029]: E0313 20:30:35.533551 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2thxr_openshift-multus(08946f02-ffb6-404b-b25c-6c261e8c2633)\"" pod="openshift-multus/multus-2thxr" podUID="08946f02-ffb6-404b-b25c-6c261e8c2633" Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 20:30:35.599331 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 20:30:35.599353 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 20:30:35.599390 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:35 crc kubenswrapper[5029]: E0313 20:30:35.599458 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:35 crc kubenswrapper[5029]: I0313 20:30:35.599616 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:35 crc kubenswrapper[5029]: E0313 20:30:35.599733 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:35 crc kubenswrapper[5029]: E0313 20:30:35.599896 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:35 crc kubenswrapper[5029]: E0313 20:30:35.600133 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:35 crc kubenswrapper[5029]: E0313 20:30:35.697566 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:30:36 crc kubenswrapper[5029]: I0313 20:30:36.537412 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/1.log" Mar 13 20:30:37 crc kubenswrapper[5029]: I0313 20:30:37.598767 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:37 crc kubenswrapper[5029]: I0313 20:30:37.598790 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:37 crc kubenswrapper[5029]: E0313 20:30:37.598920 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:37 crc kubenswrapper[5029]: I0313 20:30:37.598937 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:37 crc kubenswrapper[5029]: I0313 20:30:37.598951 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:37 crc kubenswrapper[5029]: E0313 20:30:37.599012 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:37 crc kubenswrapper[5029]: E0313 20:30:37.599199 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:37 crc kubenswrapper[5029]: E0313 20:30:37.599351 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:38 crc kubenswrapper[5029]: I0313 20:30:38.599363 5029 scope.go:117] "RemoveContainer" containerID="536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364" Mar 13 20:30:39 crc kubenswrapper[5029]: I0313 20:30:39.422846 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-frlln"] Mar 13 20:30:39 crc kubenswrapper[5029]: I0313 20:30:39.423288 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:39 crc kubenswrapper[5029]: E0313 20:30:39.423393 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:39 crc kubenswrapper[5029]: I0313 20:30:39.549806 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/3.log" Mar 13 20:30:39 crc kubenswrapper[5029]: I0313 20:30:39.552664 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerStarted","Data":"f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4"} Mar 13 20:30:39 crc kubenswrapper[5029]: I0313 20:30:39.553188 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:30:39 crc kubenswrapper[5029]: I0313 20:30:39.589145 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podStartSLOduration=149.589128889 podStartE2EDuration="2m29.589128889s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:30:39.588315667 +0000 UTC m=+199.604398120" watchObservedRunningTime="2026-03-13 20:30:39.589128889 +0000 UTC m=+199.605211292" Mar 13 20:30:39 crc kubenswrapper[5029]: I0313 20:30:39.599280 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:39 crc kubenswrapper[5029]: I0313 20:30:39.599356 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:39 crc kubenswrapper[5029]: I0313 20:30:39.599294 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:39 crc kubenswrapper[5029]: E0313 20:30:39.599418 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:39 crc kubenswrapper[5029]: E0313 20:30:39.599527 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:39 crc kubenswrapper[5029]: E0313 20:30:39.599737 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:40 crc kubenswrapper[5029]: E0313 20:30:40.698986 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:30:41 crc kubenswrapper[5029]: I0313 20:30:41.598499 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:41 crc kubenswrapper[5029]: I0313 20:30:41.598553 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:41 crc kubenswrapper[5029]: I0313 20:30:41.598586 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:41 crc kubenswrapper[5029]: E0313 20:30:41.598641 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:41 crc kubenswrapper[5029]: I0313 20:30:41.598513 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:41 crc kubenswrapper[5029]: E0313 20:30:41.598998 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:41 crc kubenswrapper[5029]: E0313 20:30:41.599221 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:41 crc kubenswrapper[5029]: E0313 20:30:41.599382 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:43 crc kubenswrapper[5029]: I0313 20:30:43.599021 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:43 crc kubenswrapper[5029]: I0313 20:30:43.599063 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:43 crc kubenswrapper[5029]: I0313 20:30:43.599132 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:43 crc kubenswrapper[5029]: I0313 20:30:43.599030 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:43 crc kubenswrapper[5029]: E0313 20:30:43.599139 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:43 crc kubenswrapper[5029]: E0313 20:30:43.599223 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:43 crc kubenswrapper[5029]: E0313 20:30:43.599300 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:43 crc kubenswrapper[5029]: E0313 20:30:43.599386 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:45 crc kubenswrapper[5029]: I0313 20:30:45.599957 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:45 crc kubenswrapper[5029]: I0313 20:30:45.600011 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:45 crc kubenswrapper[5029]: E0313 20:30:45.600507 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:45 crc kubenswrapper[5029]: I0313 20:30:45.600044 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:45 crc kubenswrapper[5029]: E0313 20:30:45.600638 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:45 crc kubenswrapper[5029]: I0313 20:30:45.600045 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:45 crc kubenswrapper[5029]: E0313 20:30:45.600696 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:45 crc kubenswrapper[5029]: E0313 20:30:45.600417 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:45 crc kubenswrapper[5029]: E0313 20:30:45.700084 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 20:30:47 crc kubenswrapper[5029]: I0313 20:30:47.599222 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:47 crc kubenswrapper[5029]: I0313 20:30:47.599275 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:47 crc kubenswrapper[5029]: I0313 20:30:47.599301 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:47 crc kubenswrapper[5029]: I0313 20:30:47.599274 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:47 crc kubenswrapper[5029]: E0313 20:30:47.599375 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:47 crc kubenswrapper[5029]: E0313 20:30:47.599469 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:47 crc kubenswrapper[5029]: E0313 20:30:47.599755 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:47 crc kubenswrapper[5029]: E0313 20:30:47.600167 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:49 crc kubenswrapper[5029]: I0313 20:30:49.598899 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:49 crc kubenswrapper[5029]: E0313 20:30:49.599040 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:49 crc kubenswrapper[5029]: I0313 20:30:49.599072 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:49 crc kubenswrapper[5029]: I0313 20:30:49.599106 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:49 crc kubenswrapper[5029]: I0313 20:30:49.599144 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:49 crc kubenswrapper[5029]: E0313 20:30:49.599151 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:49 crc kubenswrapper[5029]: E0313 20:30:49.599297 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:49 crc kubenswrapper[5029]: E0313 20:30:49.599357 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:50 crc kubenswrapper[5029]: I0313 20:30:50.600181 5029 scope.go:117] "RemoveContainer" containerID="8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c" Mar 13 20:30:50 crc kubenswrapper[5029]: E0313 20:30:50.700865 5029 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 20:30:51 crc kubenswrapper[5029]: I0313 20:30:51.592984 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/1.log" Mar 13 20:30:51 crc kubenswrapper[5029]: I0313 20:30:51.593286 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2thxr" event={"ID":"08946f02-ffb6-404b-b25c-6c261e8c2633","Type":"ContainerStarted","Data":"a15b0ae3ffa521840adc6903e498024f19ac00b1f6f98a7564d70fbded2c3161"} Mar 13 20:30:51 crc kubenswrapper[5029]: I0313 20:30:51.598897 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:51 crc kubenswrapper[5029]: I0313 20:30:51.598932 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:51 crc kubenswrapper[5029]: I0313 20:30:51.598963 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:51 crc kubenswrapper[5029]: I0313 20:30:51.599028 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:51 crc kubenswrapper[5029]: E0313 20:30:51.599021 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:51 crc kubenswrapper[5029]: E0313 20:30:51.599147 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:51 crc kubenswrapper[5029]: E0313 20:30:51.599242 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:51 crc kubenswrapper[5029]: E0313 20:30:51.599289 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:53 crc kubenswrapper[5029]: I0313 20:30:53.598964 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:53 crc kubenswrapper[5029]: I0313 20:30:53.598999 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:53 crc kubenswrapper[5029]: I0313 20:30:53.599066 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:53 crc kubenswrapper[5029]: I0313 20:30:53.598967 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:53 crc kubenswrapper[5029]: E0313 20:30:53.599143 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:53 crc kubenswrapper[5029]: E0313 20:30:53.599240 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:53 crc kubenswrapper[5029]: E0313 20:30:53.599352 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:53 crc kubenswrapper[5029]: E0313 20:30:53.599456 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:55 crc kubenswrapper[5029]: I0313 20:30:55.599786 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:55 crc kubenswrapper[5029]: E0313 20:30:55.600016 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 13 20:30:55 crc kubenswrapper[5029]: I0313 20:30:55.599802 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:55 crc kubenswrapper[5029]: E0313 20:30:55.600124 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 13 20:30:55 crc kubenswrapper[5029]: I0313 20:30:55.600259 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:55 crc kubenswrapper[5029]: E0313 20:30:55.600464 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-frlln" podUID="a301620b-657c-46c0-a1a4-f7774e38f273" Mar 13 20:30:55 crc kubenswrapper[5029]: I0313 20:30:55.600715 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:55 crc kubenswrapper[5029]: E0313 20:30:55.601028 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.599438 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.599529 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.599530 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.599462 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.602230 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.603513 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.603606 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.603875 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.604390 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 13 20:30:57 crc kubenswrapper[5029]: I0313 20:30:57.605212 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 20:31:01 crc kubenswrapper[5029]: I0313 20:31:01.978185 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.636214 5029 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.687795 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc"] Mar 13 20:31:04 crc 
kubenswrapper[5029]: I0313 20:31:04.688744 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.689051 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.690333 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9h8rj"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.690714 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.691509 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.691756 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.692100 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.692217 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.694914 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-695v5"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.695701 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.695910 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.696257 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.696448 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.696455 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.696546 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.696705 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.697423 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.697440 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.699986 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/abe01612-cef6-4c5b-aea8-627ab1418706-audit-policies\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: 
\"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700028 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abe01612-cef6-4c5b-aea8-627ab1418706-audit-dir\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700051 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/abe01612-cef6-4c5b-aea8-627ab1418706-etcd-client\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700070 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abe01612-cef6-4c5b-aea8-627ab1418706-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700089 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/abe01612-cef6-4c5b-aea8-627ab1418706-encryption-config\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700115 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/abe01612-cef6-4c5b-aea8-627ab1418706-serving-cert\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700147 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66hbm\" (UniqueName: \"kubernetes.io/projected/d3a5bbe6-2908-4756-9e53-58240ec41df8-kube-api-access-66hbm\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700165 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57fdm\" (UniqueName: \"kubernetes.io/projected/abe01612-cef6-4c5b-aea8-627ab1418706-kube-api-access-57fdm\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700187 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348fb583-d159-4f35-aefe-d7e8384a2d36-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bqdpc\" (UID: \"348fb583-d159-4f35-aefe-d7e8384a2d36\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700207 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-client-ca\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 
crc kubenswrapper[5029]: I0313 20:31:04.700225 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348fb583-d159-4f35-aefe-d7e8384a2d36-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bqdpc\" (UID: \"348fb583-d159-4f35-aefe-d7e8384a2d36\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700242 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzj8\" (UniqueName: \"kubernetes.io/projected/348fb583-d159-4f35-aefe-d7e8384a2d36-kube-api-access-zvzj8\") pod \"openshift-apiserver-operator-796bbdcf4f-bqdpc\" (UID: \"348fb583-d159-4f35-aefe-d7e8384a2d36\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700259 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-config\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700278 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700298 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/abe01612-cef6-4c5b-aea8-627ab1418706-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700313 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a5bbe6-2908-4756-9e53-58240ec41df8-serving-cert\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.700513 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5vkn2"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.701146 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.701432 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl427"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.701760 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.702219 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.702477 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.702560 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mmwnc"]
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.703371 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.706461 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.706460 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.706897 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.706899 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.715573 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.715902 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.720507 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.721564 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.721582 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.722004 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.722685 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.723294 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6"]
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.723760 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.723992 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.723769 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.723833 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.724057 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.724183 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.724230 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.724271 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.727148 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm"]
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.727811 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.728016 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.728101 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.728187 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.728281 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.729117 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.737031 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.741324 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.741382 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.741442 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.745535 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.748402 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.761997 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.762231 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.762304 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.762468 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.762521 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.762608 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.764150 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.768111 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.768319 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.768372 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8"]
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.768392 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.768507 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.769153 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.768552 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.769395 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.768622 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.768709 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.769040 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.769083 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.773805 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.785531 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.785686 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.785749 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.785832 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.785838 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.785916 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.785928 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.785562 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.786001 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.789282 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.789576 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.790783 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj"]
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.792038 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.790092 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.790138 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.790225 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.790261 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.790264 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.790554 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.790731 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.794697 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zb64j"]
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.795173 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8"]
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.795488 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.795764 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.795790 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.796400 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46"]
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802229 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/544fe537-df82-45eb-932c-89a3387540e3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jwhw8\" (UID: \"544fe537-df82-45eb-932c-89a3387540e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802277 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802318 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d41501e-682f-47d2-867d-fa61bd7e4bf1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802360 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d41501e-682f-47d2-867d-fa61bd7e4bf1-trusted-ca\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802387 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-node-pullsecrets\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802409 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-audit-dir\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802431 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-policies\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802460 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnczs\" (UniqueName: \"kubernetes.io/projected/230fc7d2-389f-45a1-b610-a10fb92b8796-kube-api-access-hnczs\") pod \"openshift-controller-manager-operator-756b6f6bc6-djlcm\" (UID: \"230fc7d2-389f-45a1-b610-a10fb92b8796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802495 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-audit\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802523 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802549 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvv6\" (UniqueName: \"kubernetes.io/projected/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-kube-api-access-qbvv6\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802579 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/abe01612-cef6-4c5b-aea8-627ab1418706-audit-policies\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802603 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abe01612-cef6-4c5b-aea8-627ab1418706-audit-dir\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802626 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfnk\" (UniqueName: \"kubernetes.io/projected/de7331b0-d805-4b94-909a-61de2cb70ce1-kube-api-access-lnfnk\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802651 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-dir\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802673 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-client-ca\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802696 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544fe537-df82-45eb-932c-89a3387540e3-config\") pod \"kube-controller-manager-operator-78b949d7b-jwhw8\" (UID: \"544fe537-df82-45eb-932c-89a3387540e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802727 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802753 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/abe01612-cef6-4c5b-aea8-627ab1418706-etcd-client\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802774 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-serving-cert\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802801 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abe01612-cef6-4c5b-aea8-627ab1418706-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802826 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/230fc7d2-389f-45a1-b610-a10fb92b8796-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-djlcm\" (UID: \"230fc7d2-389f-45a1-b610-a10fb92b8796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802871 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/abe01612-cef6-4c5b-aea8-627ab1418706-encryption-config\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802897 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vqt\" (UniqueName: \"kubernetes.io/projected/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-kube-api-access-96vqt\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802922 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.802969 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-encryption-config\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803103 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-images\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803138 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx5t6\" (UniqueName: \"kubernetes.io/projected/edba0b8e-1343-45d0-a37f-23ed39bfddab-kube-api-access-kx5t6\") pod \"catalog-operator-68c6474976-cchj6\" (UID: \"edba0b8e-1343-45d0-a37f-23ed39bfddab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803163 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhkv\" (UniqueName: \"kubernetes.io/projected/1d41501e-682f-47d2-867d-fa61bd7e4bf1-kube-api-access-qbhkv\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803185 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803206 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803232 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/edba0b8e-1343-45d0-a37f-23ed39bfddab-profile-collector-cert\") pod \"catalog-operator-68c6474976-cchj6\" (UID: \"edba0b8e-1343-45d0-a37f-23ed39bfddab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803266 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abe01612-cef6-4c5b-aea8-627ab1418706-serving-cert\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803293 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-etcd-client\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803382 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/edba0b8e-1343-45d0-a37f-23ed39bfddab-srv-cert\") pod \"catalog-operator-68c6474976-cchj6\" (UID: \"edba0b8e-1343-45d0-a37f-23ed39bfddab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803428 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803457 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66hbm\" (UniqueName: \"kubernetes.io/projected/d3a5bbe6-2908-4756-9e53-58240ec41df8-kube-api-access-66hbm\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803487 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn67l\" (UniqueName: \"kubernetes.io/projected/573fa1e5-a683-4cd2-a3d6-037732c07f53-kube-api-access-vn67l\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803511 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-config\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803542 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57fdm\" (UniqueName: \"kubernetes.io/projected/abe01612-cef6-4c5b-aea8-627ab1418706-kube-api-access-57fdm\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803566 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-etcd-serving-ca\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803591 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-config\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803616 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/573fa1e5-a683-4cd2-a3d6-037732c07f53-auth-proxy-config\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803641 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/68f8cfe3-1b3f-4145-9060-bc1c70762016-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j8wtl\" (UID: \"68f8cfe3-1b3f-4145-9060-bc1c70762016\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803670 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/544fe537-df82-45eb-932c-89a3387540e3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jwhw8\" (UID: \"544fe537-df82-45eb-932c-89a3387540e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803696 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803725 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348fb583-d159-4f35-aefe-d7e8384a2d36-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bqdpc\" (UID: \"348fb583-d159-4f35-aefe-d7e8384a2d36\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803749 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de7331b0-d805-4b94-909a-61de2cb70ce1-serving-cert\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803776 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-client-ca\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803806 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803896 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-config\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803924 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803951 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348fb583-d159-4f35-aefe-d7e8384a2d36-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bqdpc\" (UID: \"348fb583-d159-4f35-aefe-d7e8384a2d36\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.803996 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzj8\" (UniqueName: \"kubernetes.io/projected/348fb583-d159-4f35-aefe-d7e8384a2d36-kube-api-access-zvzj8\") pod \"openshift-apiserver-operator-796bbdcf4f-bqdpc\" (UID: \"348fb583-d159-4f35-aefe-d7e8384a2d36\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804023 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804047 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-config\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804074 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj"
Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804167 5029 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/573fa1e5-a683-4cd2-a3d6-037732c07f53-config\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804210 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/230fc7d2-389f-45a1-b610-a10fb92b8796-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-djlcm\" (UID: \"230fc7d2-389f-45a1-b610-a10fb92b8796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804267 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/abe01612-cef6-4c5b-aea8-627ab1418706-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804290 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a5bbe6-2908-4756-9e53-58240ec41df8-serving-cert\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804309 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-image-import-ca\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804334 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/573fa1e5-a683-4cd2-a3d6-037732c07f53-machine-approver-tls\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804352 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68v4q\" (UniqueName: \"kubernetes.io/projected/68f8cfe3-1b3f-4145-9060-bc1c70762016-kube-api-access-68v4q\") pod \"cluster-samples-operator-665b6dd947-j8wtl\" (UID: \"68f8cfe3-1b3f-4145-9060-bc1c70762016\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804375 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804402 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d47s8\" (UniqueName: \"kubernetes.io/projected/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-kube-api-access-d47s8\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804417 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d41501e-682f-47d2-867d-fa61bd7e4bf1-metrics-tls\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804445 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.805574 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/abe01612-cef6-4c5b-aea8-627ab1418706-audit-dir\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.805962 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/abe01612-cef6-4c5b-aea8-627ab1418706-audit-policies\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.805967 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abe01612-cef6-4c5b-aea8-627ab1418706-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 
20:31:04.804178 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.806527 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/abe01612-cef6-4c5b-aea8-627ab1418706-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.807711 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348fb583-d159-4f35-aefe-d7e8384a2d36-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bqdpc\" (UID: \"348fb583-d159-4f35-aefe-d7e8384a2d36\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.808394 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-client-ca\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804453 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.804503 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.808926 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-config\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.811450 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abe01612-cef6-4c5b-aea8-627ab1418706-serving-cert\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.813167 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.813397 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v97fz"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.813564 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.814076 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a5bbe6-2908-4756-9e53-58240ec41df8-serving-cert\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.814658 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lbggs"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.814939 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.815288 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.815574 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.815843 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.816344 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.816568 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.816653 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.816719 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.817910 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.818391 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.818037 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.819359 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.811538 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.819282 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.834446 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/abe01612-cef6-4c5b-aea8-627ab1418706-encryption-config\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.834867 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/abe01612-cef6-4c5b-aea8-627ab1418706-etcd-client\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.835162 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348fb583-d159-4f35-aefe-d7e8384a2d36-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bqdpc\" (UID: \"348fb583-d159-4f35-aefe-d7e8384a2d36\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.839702 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.843498 5029 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.843837 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.845648 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.853413 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.853567 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.854094 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.853576 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.864630 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.868310 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.868506 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.868637 5029 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.868763 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.868872 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.868999 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.870021 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.870050 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.870133 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.870468 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.871069 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzj8\" (UniqueName: \"kubernetes.io/projected/348fb583-d159-4f35-aefe-d7e8384a2d36-kube-api-access-zvzj8\") pod \"openshift-apiserver-operator-796bbdcf4f-bqdpc\" (UID: \"348fb583-d159-4f35-aefe-d7e8384a2d36\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.874382 5029 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.875008 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x8hs6"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.875481 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.876139 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h2jnz"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.876460 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.876639 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.876743 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.876772 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.877260 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bdvmc"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.878054 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.879018 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66hbm\" (UniqueName: \"kubernetes.io/projected/d3a5bbe6-2908-4756-9e53-58240ec41df8-kube-api-access-66hbm\") pod \"controller-manager-879f6c89f-9h8rj\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.879088 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-x5x9w"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.879740 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-x5x9w" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.880020 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.880788 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.882197 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rvlhd"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.882606 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.883178 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ttzqw"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.883717 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.887769 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.891084 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.894752 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nwl2k"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.895197 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.895289 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.895412 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.895536 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.899760 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.899820 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mmwnc"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.899831 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.901498 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl427"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.902033 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.903286 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.904072 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5vkn2"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905142 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905476 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57fdm\" (UniqueName: \"kubernetes.io/projected/abe01612-cef6-4c5b-aea8-627ab1418706-kube-api-access-57fdm\") pod \"apiserver-7bbb656c7d-bhrxn\" (UID: \"abe01612-cef6-4c5b-aea8-627ab1418706\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905727 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-node-pullsecrets\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905759 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-audit-dir\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905789 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d41501e-682f-47d2-867d-fa61bd7e4bf1-trusted-ca\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905814 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-policies\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905837 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnczs\" (UniqueName: \"kubernetes.io/projected/230fc7d2-389f-45a1-b610-a10fb92b8796-kube-api-access-hnczs\") pod \"openshift-controller-manager-operator-756b6f6bc6-djlcm\" 
(UID: \"230fc7d2-389f-45a1-b610-a10fb92b8796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905895 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-node-pullsecrets\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905902 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.905980 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-audit\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906004 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbvv6\" (UniqueName: \"kubernetes.io/projected/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-kube-api-access-qbvv6\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906031 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfnk\" (UniqueName: 
\"kubernetes.io/projected/de7331b0-d805-4b94-909a-61de2cb70ce1-kube-api-access-lnfnk\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906054 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-dir\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906075 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-client-ca\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906102 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544fe537-df82-45eb-932c-89a3387540e3-config\") pod \"kube-controller-manager-operator-78b949d7b-jwhw8\" (UID: \"544fe537-df82-45eb-932c-89a3387540e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906120 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 
20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906140 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-serving-cert\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906148 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906177 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/230fc7d2-389f-45a1-b610-a10fb92b8796-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-djlcm\" (UID: \"230fc7d2-389f-45a1-b610-a10fb92b8796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906212 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96vqt\" (UniqueName: \"kubernetes.io/projected/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-kube-api-access-96vqt\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906238 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-encryption-config\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906264 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-images\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906296 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx5t6\" (UniqueName: \"kubernetes.io/projected/edba0b8e-1343-45d0-a37f-23ed39bfddab-kube-api-access-kx5t6\") pod \"catalog-operator-68c6474976-cchj6\" (UID: \"edba0b8e-1343-45d0-a37f-23ed39bfddab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906320 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906370 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906395 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906428 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbhkv\" (UniqueName: \"kubernetes.io/projected/1d41501e-682f-47d2-867d-fa61bd7e4bf1-kube-api-access-qbhkv\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906456 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-etcd-client\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906482 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/edba0b8e-1343-45d0-a37f-23ed39bfddab-srv-cert\") pod \"catalog-operator-68c6474976-cchj6\" (UID: \"edba0b8e-1343-45d0-a37f-23ed39bfddab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906513 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/edba0b8e-1343-45d0-a37f-23ed39bfddab-profile-collector-cert\") pod \"catalog-operator-68c6474976-cchj6\" (UID: \"edba0b8e-1343-45d0-a37f-23ed39bfddab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906558 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906595 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn67l\" (UniqueName: \"kubernetes.io/projected/573fa1e5-a683-4cd2-a3d6-037732c07f53-kube-api-access-vn67l\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906621 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-config\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906656 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-etcd-serving-ca\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906680 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/573fa1e5-a683-4cd2-a3d6-037732c07f53-auth-proxy-config\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 
20:31:04.906711 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/68f8cfe3-1b3f-4145-9060-bc1c70762016-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j8wtl\" (UID: \"68f8cfe3-1b3f-4145-9060-bc1c70762016\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906741 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-config\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906762 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906788 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/544fe537-df82-45eb-932c-89a3387540e3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jwhw8\" (UID: \"544fe537-df82-45eb-932c-89a3387540e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906809 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de7331b0-d805-4b94-909a-61de2cb70ce1-serving-cert\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: 
\"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906844 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906901 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-config\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906924 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906960 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.906998 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/573fa1e5-a683-4cd2-a3d6-037732c07f53-config\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907028 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-image-import-ca\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907054 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/573fa1e5-a683-4cd2-a3d6-037732c07f53-machine-approver-tls\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907081 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/230fc7d2-389f-45a1-b610-a10fb92b8796-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-djlcm\" (UID: \"230fc7d2-389f-45a1-b610-a10fb92b8796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907115 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907113 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68v4q\" (UniqueName: 
\"kubernetes.io/projected/68f8cfe3-1b3f-4145-9060-bc1c70762016-kube-api-access-68v4q\") pod \"cluster-samples-operator-665b6dd947-j8wtl\" (UID: \"68f8cfe3-1b3f-4145-9060-bc1c70762016\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907184 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907209 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d41501e-682f-47d2-867d-fa61bd7e4bf1-metrics-tls\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907235 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d47s8\" (UniqueName: \"kubernetes.io/projected/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-kube-api-access-d47s8\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907263 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc 
kubenswrapper[5029]: I0313 20:31:04.907284 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907304 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/544fe537-df82-45eb-932c-89a3387540e3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jwhw8\" (UID: \"544fe537-df82-45eb-932c-89a3387540e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907334 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d41501e-682f-47d2-867d-fa61bd7e4bf1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.907699 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-audit-dir\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.908269 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-audit\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.908462 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-dir\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.908792 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d41501e-682f-47d2-867d-fa61bd7e4bf1-trusted-ca\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.909699 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544fe537-df82-45eb-932c-89a3387540e3-config\") pod \"kube-controller-manager-operator-78b949d7b-jwhw8\" (UID: \"544fe537-df82-45eb-932c-89a3387540e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.910403 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-policies\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.910956 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5vkn2\" (UID: 
\"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.911243 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/573fa1e5-a683-4cd2-a3d6-037732c07f53-config\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.913961 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.915316 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dg27c"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.915915 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-image-import-ca\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.916535 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/544fe537-df82-45eb-932c-89a3387540e3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jwhw8\" (UID: \"544fe537-df82-45eb-932c-89a3387540e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.916668 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.922741 5029 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.925059 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de7331b0-d805-4b94-909a-61de2cb70ce1-serving-cert\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.925365 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-config\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.925385 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d41501e-682f-47d2-867d-fa61bd7e4bf1-metrics-tls\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.927707 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-etcd-serving-ca\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 
20:31:04.928541 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.929218 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-client-ca\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.929375 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-config\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.930084 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.930517 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/573fa1e5-a683-4cd2-a3d6-037732c07f53-machine-approver-tls\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.930954 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.931532 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.931545 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.931708 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-encryption-config\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.931717 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/230fc7d2-389f-45a1-b610-a10fb92b8796-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-djlcm\" (UID: \"230fc7d2-389f-45a1-b610-a10fb92b8796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" Mar 13 20:31:04 crc 
kubenswrapper[5029]: I0313 20:31:04.932010 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.932190 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.933489 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/573fa1e5-a683-4cd2-a3d6-037732c07f53-auth-proxy-config\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.934084 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-etcd-client\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.934578 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/edba0b8e-1343-45d0-a37f-23ed39bfddab-srv-cert\") pod \"catalog-operator-68c6474976-cchj6\" (UID: 
\"edba0b8e-1343-45d0-a37f-23ed39bfddab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.935168 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.935590 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.935645 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zb64j"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.935625 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/230fc7d2-389f-45a1-b610-a10fb92b8796-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-djlcm\" (UID: \"230fc7d2-389f-45a1-b610-a10fb92b8796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.935682 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.935945 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.936312 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557230-trnjq"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.937526 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.937965 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/edba0b8e-1343-45d0-a37f-23ed39bfddab-profile-collector-cert\") pod \"catalog-operator-68c6474976-cchj6\" (UID: \"edba0b8e-1343-45d0-a37f-23ed39bfddab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.938173 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-config\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.938497 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/68f8cfe3-1b3f-4145-9060-bc1c70762016-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j8wtl\" (UID: \"68f8cfe3-1b3f-4145-9060-bc1c70762016\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 
20:31:04.939108 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.939163 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.940527 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.940830 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557230-trnjq" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.941000 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-images\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.946678 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.946773 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-serving-cert\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.947948 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.950223 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dh52p"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.951311 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.952821 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.953358 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rvqxj"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.954726 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.955959 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.957178 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.958607 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.960226 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.964072 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.964069 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.965734 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.968500 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x8hs6"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.970634 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9h8rj"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.972248 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rvlhd"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.972755 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.974552 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dg27c"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.975644 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bdvmc"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.976991 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.978072 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rjjb9"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.979517 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-sxmb7"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.980049 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sxmb7" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.980104 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.980368 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.982381 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nwl2k"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.983267 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.984751 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.986004 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lbggs"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.988496 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.990383 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v97fz"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.991987 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.992193 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.993612 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.996967 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-x5x9w"] Mar 13 20:31:04 crc kubenswrapper[5029]: I0313 20:31:04.998766 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.000247 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ttzqw"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.002538 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.004000 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rjjb9"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.005369 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dh52p"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.006521 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.007879 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557230-trnjq"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.009188 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sxmb7"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.010391 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.011607 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.012156 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.015426 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.031693 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.051651 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.071463 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.083886 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.092204 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.107995 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.113898 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.132757 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.152121 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.174345 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.193424 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.207788 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.218358 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.234352 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.253183 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.275821 5029 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.323265 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.323440 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.342501 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.351061 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.367491 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9h8rj"] Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.372315 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: W0313 20:31:05.375991 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3a5bbe6_2908_4756_9e53_58240ec41df8.slice/crio-df47d6907789728c6cec05d88bee6a3c50f56a4322b1604570e6a0c0eeb15674 WatchSource:0}: Error finding container df47d6907789728c6cec05d88bee6a3c50f56a4322b1604570e6a0c0eeb15674: Status 404 returned error can't find the container with id df47d6907789728c6cec05d88bee6a3c50f56a4322b1604570e6a0c0eeb15674 Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.392150 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.412359 5029 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.432671 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.452765 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.472409 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.491830 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.512310 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.531918 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.552231 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.575731 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.600785 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.612387 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.632397 5029 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.652332 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.677459 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.682501 5029 generic.go:334] "Generic (PLEG): container finished" podID="abe01612-cef6-4c5b-aea8-627ab1418706" containerID="c97cfdc6d7985c37d4babf6e8c4960647014bc89e216b53a850ddca35b1e6933" exitCode=0 Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.682595 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" event={"ID":"abe01612-cef6-4c5b-aea8-627ab1418706","Type":"ContainerDied","Data":"c97cfdc6d7985c37d4babf6e8c4960647014bc89e216b53a850ddca35b1e6933"} Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.682669 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" event={"ID":"abe01612-cef6-4c5b-aea8-627ab1418706","Type":"ContainerStarted","Data":"1ccf88f083df5b9ad3a0460b75b3e822a44d28fea533308964293b9ae863365e"} Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.684874 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" event={"ID":"d3a5bbe6-2908-4756-9e53-58240ec41df8","Type":"ContainerStarted","Data":"a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54"} Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.684948 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" 
event={"ID":"d3a5bbe6-2908-4756-9e53-58240ec41df8","Type":"ContainerStarted","Data":"df47d6907789728c6cec05d88bee6a3c50f56a4322b1604570e6a0c0eeb15674"} Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.685084 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.686788 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" event={"ID":"348fb583-d159-4f35-aefe-d7e8384a2d36","Type":"ContainerStarted","Data":"b0dbf694f6902ac05271ce75c31bdce0e2ee8a714dd3eb381d73dd09a3dd4e06"} Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.686840 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" event={"ID":"348fb583-d159-4f35-aefe-d7e8384a2d36","Type":"ContainerStarted","Data":"985d092bea1612ee800345b688a9202f8de8d3da93f9fc3f84cb84564170820e"} Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.687740 5029 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9h8rj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.687793 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" podUID="d3a5bbe6-2908-4756-9e53-58240ec41df8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.692140 5029 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.711872 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.732226 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.762122 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.774020 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.792229 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.812147 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.832652 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.856085 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.874208 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.890584 5029 request.go:700] Waited for 1.006630705s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.894067 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.916199 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.932544 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.952309 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.978062 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 20:31:05 crc kubenswrapper[5029]: I0313 20:31:05.990873 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.012502 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.032421 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.052773 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.072405 5029 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.092459 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.112352 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.132145 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.192107 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68v4q\" (UniqueName: \"kubernetes.io/projected/68f8cfe3-1b3f-4145-9060-bc1c70762016-kube-api-access-68v4q\") pod \"cluster-samples-operator-665b6dd947-j8wtl\" (UID: \"68f8cfe3-1b3f-4145-9060-bc1c70762016\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.215843 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d41501e-682f-47d2-867d-fa61bd7e4bf1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.246515 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.246930 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbvv6\" (UniqueName: \"kubernetes.io/projected/e6046521-c7e4-4f5d-b5ad-81e436fe2d1f-kube-api-access-qbvv6\") pod \"apiserver-76f77b778f-5vkn2\" (UID: \"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f\") " pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.257012 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfnk\" (UniqueName: \"kubernetes.io/projected/de7331b0-d805-4b94-909a-61de2cb70ce1-kube-api-access-lnfnk\") pod \"route-controller-manager-6576b87f9c-zkjm5\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.271716 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn67l\" (UniqueName: \"kubernetes.io/projected/573fa1e5-a683-4cd2-a3d6-037732c07f53-kube-api-access-vn67l\") pod \"machine-approver-56656f9798-695v5\" (UID: \"573fa1e5-a683-4cd2-a3d6-037732c07f53\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.288690 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnczs\" (UniqueName: \"kubernetes.io/projected/230fc7d2-389f-45a1-b610-a10fb92b8796-kube-api-access-hnczs\") pod \"openshift-controller-manager-operator-756b6f6bc6-djlcm\" (UID: \"230fc7d2-389f-45a1-b610-a10fb92b8796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.310881 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d47s8\" (UniqueName: \"kubernetes.io/projected/7e26e65c-4cb6-4094-b92b-9b4e0b36253b-kube-api-access-d47s8\") pod \"machine-api-operator-5694c8668f-mmwnc\" (UID: \"7e26e65c-4cb6-4094-b92b-9b4e0b36253b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.338727 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/544fe537-df82-45eb-932c-89a3387540e3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jwhw8\" (UID: \"544fe537-df82-45eb-932c-89a3387540e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.345381 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.354133 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96vqt\" (UniqueName: \"kubernetes.io/projected/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-kube-api-access-96vqt\") pod \"oauth-openshift-558db77b4-sl427\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.355272 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.373287 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.393534 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.402445 5029 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.413176 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.413712 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.426460 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl"] Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.431346 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.437781 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.456747 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.468500 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx5t6\" (UniqueName: \"kubernetes.io/projected/edba0b8e-1343-45d0-a37f-23ed39bfddab-kube-api-access-kx5t6\") pod \"catalog-operator-68c6474976-cchj6\" (UID: \"edba0b8e-1343-45d0-a37f-23ed39bfddab\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.472973 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.491987 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.493103 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbhkv\" (UniqueName: \"kubernetes.io/projected/1d41501e-682f-47d2-867d-fa61bd7e4bf1-kube-api-access-qbhkv\") pod \"ingress-operator-5b745b69d9-t2twj\" (UID: \"1d41501e-682f-47d2-867d-fa61bd7e4bf1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.515386 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.532112 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.554896 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.573050 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.597710 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.599742 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.614037 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.633694 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.652986 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.662655 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl427"] Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.672068 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.692598 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.696482 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" event={"ID":"abe01612-cef6-4c5b-aea8-627ab1418706","Type":"ContainerStarted","Data":"54d240f00dcb8caac0a36e50425cda337acb74607133d61c77036548958c2f0d"} Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.702689 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" event={"ID":"e9f4273c-6ab2-48dd-af0c-f6f03b91d037","Type":"ContainerStarted","Data":"57e8089bf2e476fc3a6832eed1ef34966aadd08d0fe107465e5caa7ae4d10c4a"} Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.705375 5029 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" event={"ID":"68f8cfe3-1b3f-4145-9060-bc1c70762016","Type":"ContainerStarted","Data":"b3936dc49d4bca1175c95211bad7460e2497c5f2b741a2826e8650dc72615bbf"} Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.714613 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.727101 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" event={"ID":"573fa1e5-a683-4cd2-a3d6-037732c07f53","Type":"ContainerStarted","Data":"e68b37da20e8f7326990311f55b4886a69c344b243b015b02cd67203e2827160"} Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.728603 5029 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9h8rj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.728652 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" podUID="d3a5bbe6-2908-4756-9e53-58240ec41df8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.736078 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.752390 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.766890 5029 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.772801 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.780679 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.785182 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"] Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.795032 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.812585 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.834293 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.853232 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.866265 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm"] Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.871539 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 
20:31:06.893110 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.909067 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8"] Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.910379 5029 request.go:700] Waited for 1.944137542s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0 Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.912193 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.932709 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.951569 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5vkn2"] Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.955895 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.957213 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mmwnc"] Mar 13 20:31:06 crc kubenswrapper[5029]: W0313 20:31:06.969662 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod544fe537_df82_45eb_932c_89a3387540e3.slice/crio-c2e20560dc746e3ff3cde6788f50a139bacad1d899de88cc2f7d0e3f0f3701f5 
WatchSource:0}: Error finding container c2e20560dc746e3ff3cde6788f50a139bacad1d899de88cc2f7d0e3f0f3701f5: Status 404 returned error can't find the container with id c2e20560dc746e3ff3cde6788f50a139bacad1d899de88cc2f7d0e3f0f3701f5 Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.971758 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 13 20:31:06 crc kubenswrapper[5029]: W0313 20:31:06.972310 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6046521_c7e4_4f5d_b5ad_81e436fe2d1f.slice/crio-6f4d8bd5073de427e6a6ff3db4cdf1b93f9b3b4eeb04c69111fc9be430663683 WatchSource:0}: Error finding container 6f4d8bd5073de427e6a6ff3db4cdf1b93f9b3b4eeb04c69111fc9be430663683: Status 404 returned error can't find the container with id 6f4d8bd5073de427e6a6ff3db4cdf1b93f9b3b4eeb04c69111fc9be430663683 Mar 13 20:31:06 crc kubenswrapper[5029]: I0313 20:31:06.998223 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.011210 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.035129 5029 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.055524 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.163748 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6"] Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173561 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drtfq\" (UniqueName: \"kubernetes.io/projected/8a1ea22d-3be3-412d-be38-ab360aae90e5-kube-api-access-drtfq\") pod \"marketplace-operator-79b997595-zb64j\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173602 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z84c4\" (UniqueName: \"kubernetes.io/projected/33a7299c-87fb-43cf-a916-1946c218ad78-kube-api-access-z84c4\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173620 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpzk9\" (UniqueName: \"kubernetes.io/projected/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-kube-api-access-kpzk9\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173641 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-oauth-config\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173658 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31b17eb1-07a9-4cfb-9589-e45a4ac62791-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173683 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ft89\" (UniqueName: \"kubernetes.io/projected/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-kube-api-access-5ft89\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173711 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82531657-5b20-4b32-a23c-3dbe4370c657-serving-cert\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173758 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4mbg\" (UniqueName: \"kubernetes.io/projected/55243e70-3d3c-44df-ac61-d298330ff633-kube-api-access-l4mbg\") pod \"downloads-7954f5f757-x5x9w\" (UID: \"55243e70-3d3c-44df-ac61-d298330ff633\") " pod="openshift-console/downloads-7954f5f757-x5x9w" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173776 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5k74\" (UniqueName: \"kubernetes.io/projected/c6fda68b-609a-4564-9fd8-ccfd526fa9de-kube-api-access-n5k74\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7vv8\" (UID: \"c6fda68b-609a-4564-9fd8-ccfd526fa9de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" Mar 13 20:31:07 
crc kubenswrapper[5029]: I0313 20:31:07.173793 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82531657-5b20-4b32-a23c-3dbe4370c657-config\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.173821 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c5787c5c-be3a-43cc-bf49-46573f2b31c1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ncp4l\" (UID: \"c5787c5c-be3a-43cc-bf49-46573f2b31c1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.178171 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-registry-certificates\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.178834 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e879012b-d78a-4309-819f-fa76fc8fdec3-config\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.178958 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/82531657-5b20-4b32-a23c-3dbe4370c657-etcd-service-ca\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.178999 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-oauth-serving-cert\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.179656 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/82531657-5b20-4b32-a23c-3dbe4370c657-etcd-client\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180034 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvmh\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-kube-api-access-9nvmh\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180111 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zb64j\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:31:07 crc 
kubenswrapper[5029]: I0313 20:31:07.180137 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc9ea66-cfec-47f9-a106-f8ad7c0a162c-config\") pod \"kube-apiserver-operator-766d6c64bb-gb8zr\" (UID: \"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180195 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-trusted-ca-bundle\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180218 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzfv7\" (UniqueName: \"kubernetes.io/projected/82531657-5b20-4b32-a23c-3dbe4370c657-kube-api-access-tzfv7\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180245 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-trusted-ca\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180269 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/33a7299c-87fb-43cf-a916-1946c218ad78-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180299 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-tmpfs\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180324 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6fda68b-609a-4564-9fd8-ccfd526fa9de-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7vv8\" (UID: \"c6fda68b-609a-4564-9fd8-ccfd526fa9de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180391 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-registry-tls\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180415 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-serving-cert\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180438 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-trusted-ca\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180492 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-service-ca-bundle\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180542 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-metrics-certs\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180570 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb5e50b8-e1b8-4351-9556-d4da3816791d-proxy-tls\") pod \"machine-config-controller-84d6567774-8bsws\" (UID: \"fb5e50b8-e1b8-4351-9556-d4da3816791d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.180644 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31b17eb1-07a9-4cfb-9589-e45a4ac62791-images\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: 
\"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.182553 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc9ea66-cfec-47f9-a106-f8ad7c0a162c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gb8zr\" (UID: \"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.182642 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/120ab712-4dde-43e5-8e14-f755accec059-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.182696 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrzmb\" (UniqueName: \"kubernetes.io/projected/784d49a1-0554-4b42-aa6b-35f4ab0dcc7a-kube-api-access-nrzmb\") pod \"package-server-manager-789f6589d5-cb72p\" (UID: \"784d49a1-0554-4b42-aa6b-35f4ab0dcc7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.182779 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l924k\" (UniqueName: \"kubernetes.io/projected/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-kube-api-access-l924k\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.182884 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5sw\" (UniqueName: \"kubernetes.io/projected/ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1-kube-api-access-rs5sw\") pod \"multus-admission-controller-857f4d67dd-x8hs6\" (UID: \"ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.182957 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-service-ca\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.183377 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-serving-cert\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.184664 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kp9b\" (UniqueName: \"kubernetes.io/projected/fb5e50b8-e1b8-4351-9556-d4da3816791d-kube-api-access-8kp9b\") pod \"machine-config-controller-84d6567774-8bsws\" (UID: \"fb5e50b8-e1b8-4351-9556-d4da3816791d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.184710 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e879012b-d78a-4309-819f-fa76fc8fdec3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.184752 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/120ab712-4dde-43e5-8e14-f755accec059-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.184774 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhd5\" (UniqueName: \"kubernetes.io/projected/38ba7d36-baaf-4e14-aa8e-5236ee9500de-kube-api-access-qfhd5\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.184799 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgr5v\" (UniqueName: \"kubernetes.io/projected/1dab1066-bb46-406d-b993-4e6ca669447f-kube-api-access-zgr5v\") pod \"dns-operator-744455d44c-ttzqw\" (UID: \"1dab1066-bb46-406d-b993-4e6ca669447f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.184825 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-apiservice-cert\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.185538 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc9ea66-cfec-47f9-a106-f8ad7c0a162c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gb8zr\" (UID: \"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.185643 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1dab1066-bb46-406d-b993-4e6ca669447f-metrics-tls\") pod \"dns-operator-744455d44c-ttzqw\" (UID: \"1dab1066-bb46-406d-b993-4e6ca669447f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.186307 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6fda68b-609a-4564-9fd8-ccfd526fa9de-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7vv8\" (UID: \"c6fda68b-609a-4564-9fd8-ccfd526fa9de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.186594 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.186629 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e879012b-d78a-4309-819f-fa76fc8fdec3-service-ca-bundle\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.186659 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x8hs6\" (UID: \"ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.186687 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e879012b-d78a-4309-819f-fa76fc8fdec3-serving-cert\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.186715 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33a7299c-87fb-43cf-a916-1946c218ad78-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.186743 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zb64j\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-zb64j"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.186772 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-config\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.186810 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-config\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.196328 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31b17eb1-07a9-4cfb-9589-e45a4ac62791-proxy-tls\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.196414 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhgj6\" (UniqueName: \"kubernetes.io/projected/31b17eb1-07a9-4cfb-9589-e45a4ac62791-kube-api-access-nhgj6\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.196452 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5787c5c-be3a-43cc-bf49-46573f2b31c1-serving-cert\") pod \"openshift-config-operator-7777fb866f-ncp4l\" (UID: \"c5787c5c-be3a-43cc-bf49-46573f2b31c1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.196517 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-default-certificate\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.196545 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-bound-sa-token\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.196634 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb5e50b8-e1b8-4351-9556-d4da3816791d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8bsws\" (UID: \"fb5e50b8-e1b8-4351-9556-d4da3816791d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.199449 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnbf\" (UniqueName: \"kubernetes.io/projected/067c1734-d7ab-4e50-b020-1b65f0350169-kube-api-access-zfnbf\") pod \"migrator-59844c95c7-ljj46\" (UID: \"067c1734-d7ab-4e50-b020-1b65f0350169\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.199513 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52qdd\" (UniqueName: \"kubernetes.io/projected/e879012b-d78a-4309-819f-fa76fc8fdec3-kube-api-access-52qdd\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.199538 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-webhook-cert\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.199607 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/82531657-5b20-4b32-a23c-3dbe4370c657-etcd-ca\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.199640 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/784d49a1-0554-4b42-aa6b-35f4ab0dcc7a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cb72p\" (UID: \"784d49a1-0554-4b42-aa6b-35f4ab0dcc7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.199778 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-stats-auth\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.199918 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2r42\" (UniqueName: \"kubernetes.io/projected/c5787c5c-be3a-43cc-bf49-46573f2b31c1-kube-api-access-s2r42\") pod \"openshift-config-operator-7777fb866f-ncp4l\" (UID: \"c5787c5c-be3a-43cc-bf49-46573f2b31c1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.199957 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33a7299c-87fb-43cf-a916-1946c218ad78-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f"
Mar 13 20:31:07 crc kubenswrapper[5029]: E0313 20:31:07.204323 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:07.704296124 +0000 UTC m=+227.720378737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.245046 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj"]
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.306733 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.306871 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb5e50b8-e1b8-4351-9556-d4da3816791d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8bsws\" (UID: \"fb5e50b8-e1b8-4351-9556-d4da3816791d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.306895 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnbf\" (UniqueName: \"kubernetes.io/projected/067c1734-d7ab-4e50-b020-1b65f0350169-kube-api-access-zfnbf\") pod \"migrator-59844c95c7-ljj46\" (UID: \"067c1734-d7ab-4e50-b020-1b65f0350169\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.306917 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnvq\" (UniqueName: \"kubernetes.io/projected/e6222f4e-fb93-4b18-a790-7f4affeb8232-kube-api-access-bbnvq\") pod \"machine-config-server-rvqxj\" (UID: \"e6222f4e-fb93-4b18-a790-7f4affeb8232\") " pod="openshift-machine-config-operator/machine-config-server-rvqxj"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.306936 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52qdd\" (UniqueName: \"kubernetes.io/projected/e879012b-d78a-4309-819f-fa76fc8fdec3-kube-api-access-52qdd\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.306955 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-webhook-cert\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.306969 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/82531657-5b20-4b32-a23c-3dbe4370c657-etcd-ca\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.306985 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/784d49a1-0554-4b42-aa6b-35f4ab0dcc7a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cb72p\" (UID: \"784d49a1-0554-4b42-aa6b-35f4ab0dcc7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307002 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-stats-auth\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307016 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2r42\" (UniqueName: \"kubernetes.io/projected/c5787c5c-be3a-43cc-bf49-46573f2b31c1-kube-api-access-s2r42\") pod \"openshift-config-operator-7777fb866f-ncp4l\" (UID: \"c5787c5c-be3a-43cc-bf49-46573f2b31c1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307032 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f94341bb-1e1c-4a8d-bf68-92658a9c0632-signing-cabundle\") pod \"service-ca-9c57cc56f-dg27c\" (UID: \"f94341bb-1e1c-4a8d-bf68-92658a9c0632\") " pod="openshift-service-ca/service-ca-9c57cc56f-dg27c"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307050 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33a7299c-87fb-43cf-a916-1946c218ad78-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307070 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqj4h\" (UniqueName: \"kubernetes.io/projected/87be7113-65b4-48fc-9c93-a7bbb0bf9136-kube-api-access-xqj4h\") pod \"dns-default-dh52p\" (UID: \"87be7113-65b4-48fc-9c93-a7bbb0bf9136\") " pod="openshift-dns/dns-default-dh52p"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307086 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drtfq\" (UniqueName: \"kubernetes.io/projected/8a1ea22d-3be3-412d-be38-ab360aae90e5-kube-api-access-drtfq\") pod \"marketplace-operator-79b997595-zb64j\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-zb64j"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307103 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87be7113-65b4-48fc-9c93-a7bbb0bf9136-metrics-tls\") pod \"dns-default-dh52p\" (UID: \"87be7113-65b4-48fc-9c93-a7bbb0bf9136\") " pod="openshift-dns/dns-default-dh52p"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307120 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84c4\" (UniqueName: \"kubernetes.io/projected/33a7299c-87fb-43cf-a916-1946c218ad78-kube-api-access-z84c4\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307135 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpzk9\" (UniqueName: \"kubernetes.io/projected/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-kube-api-access-kpzk9\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307152 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31b17eb1-07a9-4cfb-9589-e45a4ac62791-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307170 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-oauth-config\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307186 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdc59b31-dc24-48fe-ba01-865f51aaf2cc-srv-cert\") pod \"olm-operator-6b444d44fb-mrx25\" (UID: \"bdc59b31-dc24-48fe-ba01-865f51aaf2cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307204 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ft89\" (UniqueName: \"kubernetes.io/projected/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-kube-api-access-5ft89\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307220 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82531657-5b20-4b32-a23c-3dbe4370c657-serving-cert\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307237 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4mbg\" (UniqueName: \"kubernetes.io/projected/55243e70-3d3c-44df-ac61-d298330ff633-kube-api-access-l4mbg\") pod \"downloads-7954f5f757-x5x9w\" (UID: \"55243e70-3d3c-44df-ac61-d298330ff633\") " pod="openshift-console/downloads-7954f5f757-x5x9w"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307253 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5k74\" (UniqueName: \"kubernetes.io/projected/c6fda68b-609a-4564-9fd8-ccfd526fa9de-kube-api-access-n5k74\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7vv8\" (UID: \"c6fda68b-609a-4564-9fd8-ccfd526fa9de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307283 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82531657-5b20-4b32-a23c-3dbe4370c657-config\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307300 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c5787c5c-be3a-43cc-bf49-46573f2b31c1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ncp4l\" (UID: \"c5787c5c-be3a-43cc-bf49-46573f2b31c1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307318 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhkbp\" (UniqueName: \"kubernetes.io/projected/2e1618f0-bd7b-48fb-aeed-213d80e0c1e7-kube-api-access-rhkbp\") pod \"service-ca-operator-777779d784-t8qbl\" (UID: \"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307337 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-registry-certificates\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307353 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e879012b-d78a-4309-819f-fa76fc8fdec3-config\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307371 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/82531657-5b20-4b32-a23c-3dbe4370c657-etcd-service-ca\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307390 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-oauth-serving-cert\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307406 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-plugins-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307427 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea188f71-10c4-410b-bcb1-766aa053182d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f45bw\" (UID: \"ea188f71-10c4-410b-bcb1-766aa053182d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307453 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/82531657-5b20-4b32-a23c-3dbe4370c657-etcd-client\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307476 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxgd\" (UniqueName: \"kubernetes.io/projected/5db2bce8-6a97-4593-9780-39b314a116b2-kube-api-access-qhxgd\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307498 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87be7113-65b4-48fc-9c93-a7bbb0bf9136-config-volume\") pod \"dns-default-dh52p\" (UID: \"87be7113-65b4-48fc-9c93-a7bbb0bf9136\") " pod="openshift-dns/dns-default-dh52p"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307528 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvmh\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-kube-api-access-9nvmh\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307547 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zb64j\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-zb64j"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307754 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc9ea66-cfec-47f9-a106-f8ad7c0a162c-config\") pod \"kube-apiserver-operator-766d6c64bb-gb8zr\" (UID: \"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307771 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-trusted-ca-bundle\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307790 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzfv7\" (UniqueName: \"kubernetes.io/projected/82531657-5b20-4b32-a23c-3dbe4370c657-kube-api-access-tzfv7\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307806 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-mountpoint-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307822 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-csi-data-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307839 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-trusted-ca\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307871 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/33a7299c-87fb-43cf-a916-1946c218ad78-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307887 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-tmpfs\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307903 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6fda68b-609a-4564-9fd8-ccfd526fa9de-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7vv8\" (UID: \"c6fda68b-609a-4564-9fd8-ccfd526fa9de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307921 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh25b\" (UniqueName: \"kubernetes.io/projected/5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd-kube-api-access-jh25b\") pod \"auto-csr-approver-29557230-trnjq\" (UID: \"5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd\") " pod="openshift-infra/auto-csr-approver-29557230-trnjq"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307942 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-registry-tls\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307958 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-serving-cert\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307974 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-trusted-ca\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.307991 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e1618f0-bd7b-48fb-aeed-213d80e0c1e7-config\") pod \"service-ca-operator-777779d784-t8qbl\" (UID: \"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308007 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-service-ca-bundle\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308023 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-metrics-certs\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308039 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb5e50b8-e1b8-4351-9556-d4da3816791d-proxy-tls\") pod \"machine-config-controller-84d6567774-8bsws\" (UID: \"fb5e50b8-e1b8-4351-9556-d4da3816791d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308054 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea188f71-10c4-410b-bcb1-766aa053182d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f45bw\" (UID: \"ea188f71-10c4-410b-bcb1-766aa053182d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308080 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31b17eb1-07a9-4cfb-9589-e45a4ac62791-images\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308096 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc9ea66-cfec-47f9-a106-f8ad7c0a162c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gb8zr\" (UID: \"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308112 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/120ab712-4dde-43e5-8e14-f755accec059-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308129 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzmb\"
(UniqueName: \"kubernetes.io/projected/784d49a1-0554-4b42-aa6b-35f4ab0dcc7a-kube-api-access-nrzmb\") pod \"package-server-manager-789f6589d5-cb72p\" (UID: \"784d49a1-0554-4b42-aa6b-35f4ab0dcc7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308145 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khptw\" (UniqueName: \"kubernetes.io/projected/f94341bb-1e1c-4a8d-bf68-92658a9c0632-kube-api-access-khptw\") pod \"service-ca-9c57cc56f-dg27c\" (UID: \"f94341bb-1e1c-4a8d-bf68-92658a9c0632\") " pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308169 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l924k\" (UniqueName: \"kubernetes.io/projected/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-kube-api-access-l924k\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308188 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs5sw\" (UniqueName: \"kubernetes.io/projected/ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1-kube-api-access-rs5sw\") pod \"multus-admission-controller-857f4d67dd-x8hs6\" (UID: \"ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308205 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjnbh\" (UniqueName: \"kubernetes.io/projected/cf8749c5-afa5-48fa-a7a4-a63a7754e27f-kube-api-access-wjnbh\") pod \"ingress-canary-sxmb7\" (UID: \"cf8749c5-afa5-48fa-a7a4-a63a7754e27f\") " pod="openshift-ingress-canary/ingress-canary-sxmb7" Mar 13 
20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308223 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-service-ca\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308240 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-registration-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308257 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-serving-cert\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308274 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e6222f4e-fb93-4b18-a790-7f4affeb8232-node-bootstrap-token\") pod \"machine-config-server-rvqxj\" (UID: \"e6222f4e-fb93-4b18-a790-7f4affeb8232\") " pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308290 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kp9b\" (UniqueName: \"kubernetes.io/projected/fb5e50b8-e1b8-4351-9556-d4da3816791d-kube-api-access-8kp9b\") pod \"machine-config-controller-84d6567774-8bsws\" (UID: 
\"fb5e50b8-e1b8-4351-9556-d4da3816791d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308304 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e879012b-d78a-4309-819f-fa76fc8fdec3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308319 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e6222f4e-fb93-4b18-a790-7f4affeb8232-certs\") pod \"machine-config-server-rvqxj\" (UID: \"e6222f4e-fb93-4b18-a790-7f4affeb8232\") " pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308334 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea188f71-10c4-410b-bcb1-766aa053182d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f45bw\" (UID: \"ea188f71-10c4-410b-bcb1-766aa053182d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308352 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f94341bb-1e1c-4a8d-bf68-92658a9c0632-signing-key\") pod \"service-ca-9c57cc56f-dg27c\" (UID: \"f94341bb-1e1c-4a8d-bf68-92658a9c0632\") " pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308368 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/120ab712-4dde-43e5-8e14-f755accec059-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308384 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhd5\" (UniqueName: \"kubernetes.io/projected/38ba7d36-baaf-4e14-aa8e-5236ee9500de-kube-api-access-qfhd5\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308399 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgr5v\" (UniqueName: \"kubernetes.io/projected/1dab1066-bb46-406d-b993-4e6ca669447f-kube-api-access-zgr5v\") pod \"dns-operator-744455d44c-ttzqw\" (UID: \"1dab1066-bb46-406d-b993-4e6ca669447f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308417 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-apiservice-cert\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308440 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-socket-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308481 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc9ea66-cfec-47f9-a106-f8ad7c0a162c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gb8zr\" (UID: \"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308501 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e1618f0-bd7b-48fb-aeed-213d80e0c1e7-serving-cert\") pod \"service-ca-operator-777779d784-t8qbl\" (UID: \"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308522 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1dab1066-bb46-406d-b993-4e6ca669447f-metrics-tls\") pod \"dns-operator-744455d44c-ttzqw\" (UID: \"1dab1066-bb46-406d-b993-4e6ca669447f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308541 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6fda68b-609a-4564-9fd8-ccfd526fa9de-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7vv8\" (UID: \"c6fda68b-609a-4564-9fd8-ccfd526fa9de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" Mar 13 20:31:07 crc kubenswrapper[5029]: E0313 20:31:07.308561 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:07.808540158 +0000 UTC m=+227.824622571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308610 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-secret-volume\") pod \"collect-profiles-29557230-z7qq7\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308840 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-config-volume\") pod \"collect-profiles-29557230-z7qq7\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308893 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308918 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e879012b-d78a-4309-819f-fa76fc8fdec3-service-ca-bundle\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308958 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x8hs6\" (UID: \"ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.308985 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0d54d7e-5ec4-46ce-b90e-96e976596cc3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-h2sxq\" (UID: \"a0d54d7e-5ec4-46ce-b90e-96e976596cc3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309030 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e879012b-d78a-4309-819f-fa76fc8fdec3-serving-cert\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309057 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33a7299c-87fb-43cf-a916-1946c218ad78-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309079 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zb64j\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309124 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-config\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309193 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bdc59b31-dc24-48fe-ba01-865f51aaf2cc-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mrx25\" (UID: \"bdc59b31-dc24-48fe-ba01-865f51aaf2cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309218 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-config\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309224 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-oauth-serving-cert\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309242 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6ts\" (UniqueName: \"kubernetes.io/projected/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-kube-api-access-pm6ts\") pod \"collect-profiles-29557230-z7qq7\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309404 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31b17eb1-07a9-4cfb-9589-e45a4ac62791-proxy-tls\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309450 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhgj6\" (UniqueName: \"kubernetes.io/projected/31b17eb1-07a9-4cfb-9589-e45a4ac62791-kube-api-access-nhgj6\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309478 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5787c5c-be3a-43cc-bf49-46573f2b31c1-serving-cert\") pod \"openshift-config-operator-7777fb866f-ncp4l\" (UID: \"c5787c5c-be3a-43cc-bf49-46573f2b31c1\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309500 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf8749c5-afa5-48fa-a7a4-a63a7754e27f-cert\") pod \"ingress-canary-sxmb7\" (UID: \"cf8749c5-afa5-48fa-a7a4-a63a7754e27f\") " pod="openshift-ingress-canary/ingress-canary-sxmb7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309548 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-default-certificate\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309574 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qb7g\" (UniqueName: \"kubernetes.io/projected/a0d54d7e-5ec4-46ce-b90e-96e976596cc3-kube-api-access-8qb7g\") pod \"control-plane-machine-set-operator-78cbb6b69f-h2sxq\" (UID: \"a0d54d7e-5ec4-46ce-b90e-96e976596cc3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309617 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2zx\" (UniqueName: \"kubernetes.io/projected/bdc59b31-dc24-48fe-ba01-865f51aaf2cc-kube-api-access-2h2zx\") pod \"olm-operator-6b444d44fb-mrx25\" (UID: \"bdc59b31-dc24-48fe-ba01-865f51aaf2cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.309655 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-bound-sa-token\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.310320 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31b17eb1-07a9-4cfb-9589-e45a4ac62791-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.310922 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82531657-5b20-4b32-a23c-3dbe4370c657-config\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.311144 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb5e50b8-e1b8-4351-9556-d4da3816791d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8bsws\" (UID: \"fb5e50b8-e1b8-4351-9556-d4da3816791d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.311397 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31b17eb1-07a9-4cfb-9589-e45a4ac62791-images\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.312809 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-service-ca\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.313050 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/120ab712-4dde-43e5-8e14-f755accec059-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.314596 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc9ea66-cfec-47f9-a106-f8ad7c0a162c-config\") pod \"kube-apiserver-operator-766d6c64bb-gb8zr\" (UID: \"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.315610 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/82531657-5b20-4b32-a23c-3dbe4370c657-etcd-ca\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.318251 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c5787c5c-be3a-43cc-bf49-46573f2b31c1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ncp4l\" (UID: \"c5787c5c-be3a-43cc-bf49-46573f2b31c1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 
20:31:07.322182 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-config\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.323476 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-tmpfs\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.324331 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/120ab712-4dde-43e5-8e14-f755accec059-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.325000 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zb64j\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.326111 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6fda68b-609a-4564-9fd8-ccfd526fa9de-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7vv8\" (UID: \"c6fda68b-609a-4564-9fd8-ccfd526fa9de\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.326757 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-trusted-ca-bundle\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.327200 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e879012b-d78a-4309-819f-fa76fc8fdec3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.328086 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-registry-certificates\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.328557 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-config\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.329366 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e879012b-d78a-4309-819f-fa76fc8fdec3-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.330300 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33a7299c-87fb-43cf-a916-1946c218ad78-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" Mar 13 20:31:07 crc kubenswrapper[5029]: E0313 20:31:07.330657 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:07.830641383 +0000 UTC m=+227.846723786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.331204 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-apiservice-cert\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.331553 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-trusted-ca\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.332029 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e879012b-d78a-4309-819f-fa76fc8fdec3-config\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.333266 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/82531657-5b20-4b32-a23c-3dbe4370c657-etcd-service-ca\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.335095 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-trusted-ca\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.335633 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-service-ca-bundle\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.339252 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/31b17eb1-07a9-4cfb-9589-e45a4ac62791-proxy-tls\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: \"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.340545 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-stats-auth\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.342456 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e879012b-d78a-4309-819f-fa76fc8fdec3-serving-cert\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.344823 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-registry-tls\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.345278 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-serving-cert\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.345534 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-serving-cert\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.346143 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x8hs6\" (UID: \"ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.351418 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5787c5c-be3a-43cc-bf49-46573f2b31c1-serving-cert\") pod \"openshift-config-operator-7777fb866f-ncp4l\" (UID: \"c5787c5c-be3a-43cc-bf49-46573f2b31c1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.360551 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6fda68b-609a-4564-9fd8-ccfd526fa9de-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7vv8\" (UID: \"c6fda68b-609a-4564-9fd8-ccfd526fa9de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.362983 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1dab1066-bb46-406d-b993-4e6ca669447f-metrics-tls\") pod \"dns-operator-744455d44c-ttzqw\" (UID: \"1dab1066-bb46-406d-b993-4e6ca669447f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 
20:31:07.365661 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc9ea66-cfec-47f9-a106-f8ad7c0a162c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gb8zr\" (UID: \"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.365960 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-default-certificate\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.366585 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-webhook-cert\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.366658 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb5e50b8-e1b8-4351-9556-d4da3816791d-proxy-tls\") pod \"machine-config-controller-84d6567774-8bsws\" (UID: \"fb5e50b8-e1b8-4351-9556-d4da3816791d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.367130 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-oauth-config\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 
20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.367188 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/33a7299c-87fb-43cf-a916-1946c218ad78-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.367191 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/82531657-5b20-4b32-a23c-3dbe4370c657-etcd-client\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.367662 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82531657-5b20-4b32-a23c-3dbe4370c657-serving-cert\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.367686 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-metrics-certs\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.367833 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zb64j\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.384103 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/784d49a1-0554-4b42-aa6b-35f4ab0dcc7a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cb72p\" (UID: \"784d49a1-0554-4b42-aa6b-35f4ab0dcc7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.391868 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-bound-sa-token\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.403781 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5k74\" (UniqueName: \"kubernetes.io/projected/c6fda68b-609a-4564-9fd8-ccfd526fa9de-kube-api-access-n5k74\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7vv8\" (UID: \"c6fda68b-609a-4564-9fd8-ccfd526fa9de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.410684 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.410845 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-socket-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.410891 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e1618f0-bd7b-48fb-aeed-213d80e0c1e7-serving-cert\") pod \"service-ca-operator-777779d784-t8qbl\" (UID: \"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.410914 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-secret-volume\") pod \"collect-profiles-29557230-z7qq7\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.410932 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-config-volume\") pod \"collect-profiles-29557230-z7qq7\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.410949 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0d54d7e-5ec4-46ce-b90e-96e976596cc3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-h2sxq\" (UID: \"a0d54d7e-5ec4-46ce-b90e-96e976596cc3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" Mar 13 20:31:07 crc kubenswrapper[5029]: 
I0313 20:31:07.410978 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bdc59b31-dc24-48fe-ba01-865f51aaf2cc-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mrx25\" (UID: \"bdc59b31-dc24-48fe-ba01-865f51aaf2cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411010 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm6ts\" (UniqueName: \"kubernetes.io/projected/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-kube-api-access-pm6ts\") pod \"collect-profiles-29557230-z7qq7\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411034 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf8749c5-afa5-48fa-a7a4-a63a7754e27f-cert\") pod \"ingress-canary-sxmb7\" (UID: \"cf8749c5-afa5-48fa-a7a4-a63a7754e27f\") " pod="openshift-ingress-canary/ingress-canary-sxmb7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411052 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qb7g\" (UniqueName: \"kubernetes.io/projected/a0d54d7e-5ec4-46ce-b90e-96e976596cc3-kube-api-access-8qb7g\") pod \"control-plane-machine-set-operator-78cbb6b69f-h2sxq\" (UID: \"a0d54d7e-5ec4-46ce-b90e-96e976596cc3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411072 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2zx\" (UniqueName: \"kubernetes.io/projected/bdc59b31-dc24-48fe-ba01-865f51aaf2cc-kube-api-access-2h2zx\") pod \"olm-operator-6b444d44fb-mrx25\" (UID: 
\"bdc59b31-dc24-48fe-ba01-865f51aaf2cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411107 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnvq\" (UniqueName: \"kubernetes.io/projected/e6222f4e-fb93-4b18-a790-7f4affeb8232-kube-api-access-bbnvq\") pod \"machine-config-server-rvqxj\" (UID: \"e6222f4e-fb93-4b18-a790-7f4affeb8232\") " pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411130 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f94341bb-1e1c-4a8d-bf68-92658a9c0632-signing-cabundle\") pod \"service-ca-9c57cc56f-dg27c\" (UID: \"f94341bb-1e1c-4a8d-bf68-92658a9c0632\") " pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411154 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqj4h\" (UniqueName: \"kubernetes.io/projected/87be7113-65b4-48fc-9c93-a7bbb0bf9136-kube-api-access-xqj4h\") pod \"dns-default-dh52p\" (UID: \"87be7113-65b4-48fc-9c93-a7bbb0bf9136\") " pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411176 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87be7113-65b4-48fc-9c93-a7bbb0bf9136-metrics-tls\") pod \"dns-default-dh52p\" (UID: \"87be7113-65b4-48fc-9c93-a7bbb0bf9136\") " pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411204 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdc59b31-dc24-48fe-ba01-865f51aaf2cc-srv-cert\") pod \"olm-operator-6b444d44fb-mrx25\" (UID: 
\"bdc59b31-dc24-48fe-ba01-865f51aaf2cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411235 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhkbp\" (UniqueName: \"kubernetes.io/projected/2e1618f0-bd7b-48fb-aeed-213d80e0c1e7-kube-api-access-rhkbp\") pod \"service-ca-operator-777779d784-t8qbl\" (UID: \"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411254 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-plugins-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411276 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea188f71-10c4-410b-bcb1-766aa053182d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f45bw\" (UID: \"ea188f71-10c4-410b-bcb1-766aa053182d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411301 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhxgd\" (UniqueName: \"kubernetes.io/projected/5db2bce8-6a97-4593-9780-39b314a116b2-kube-api-access-qhxgd\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411317 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87be7113-65b4-48fc-9c93-a7bbb0bf9136-config-volume\") pod \"dns-default-dh52p\" (UID: \"87be7113-65b4-48fc-9c93-a7bbb0bf9136\") " pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411346 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-mountpoint-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411363 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-csi-data-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411393 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh25b\" (UniqueName: \"kubernetes.io/projected/5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd-kube-api-access-jh25b\") pod \"auto-csr-approver-29557230-trnjq\" (UID: \"5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd\") " pod="openshift-infra/auto-csr-approver-29557230-trnjq" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411431 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e1618f0-bd7b-48fb-aeed-213d80e0c1e7-config\") pod \"service-ca-operator-777779d784-t8qbl\" (UID: \"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411449 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ea188f71-10c4-410b-bcb1-766aa053182d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f45bw\" (UID: \"ea188f71-10c4-410b-bcb1-766aa053182d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411487 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khptw\" (UniqueName: \"kubernetes.io/projected/f94341bb-1e1c-4a8d-bf68-92658a9c0632-kube-api-access-khptw\") pod \"service-ca-9c57cc56f-dg27c\" (UID: \"f94341bb-1e1c-4a8d-bf68-92658a9c0632\") " pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411525 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjnbh\" (UniqueName: \"kubernetes.io/projected/cf8749c5-afa5-48fa-a7a4-a63a7754e27f-kube-api-access-wjnbh\") pod \"ingress-canary-sxmb7\" (UID: \"cf8749c5-afa5-48fa-a7a4-a63a7754e27f\") " pod="openshift-ingress-canary/ingress-canary-sxmb7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411586 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-registration-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411606 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e6222f4e-fb93-4b18-a790-7f4affeb8232-node-bootstrap-token\") pod \"machine-config-server-rvqxj\" (UID: \"e6222f4e-fb93-4b18-a790-7f4affeb8232\") " pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411624 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea188f71-10c4-410b-bcb1-766aa053182d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f45bw\" (UID: \"ea188f71-10c4-410b-bcb1-766aa053182d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411640 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f94341bb-1e1c-4a8d-bf68-92658a9c0632-signing-key\") pod \"service-ca-9c57cc56f-dg27c\" (UID: \"f94341bb-1e1c-4a8d-bf68-92658a9c0632\") " pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.411663 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e6222f4e-fb93-4b18-a790-7f4affeb8232-certs\") pod \"machine-config-server-rvqxj\" (UID: \"e6222f4e-fb93-4b18-a790-7f4affeb8232\") " pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.412976 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnbf\" (UniqueName: \"kubernetes.io/projected/067c1734-d7ab-4e50-b020-1b65f0350169-kube-api-access-zfnbf\") pod \"migrator-59844c95c7-ljj46\" (UID: \"067c1734-d7ab-4e50-b020-1b65f0350169\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.414256 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-plugins-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.414713 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc9ea66-cfec-47f9-a106-f8ad7c0a162c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gb8zr\" (UID: \"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.414899 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-registration-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.415141 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e1618f0-bd7b-48fb-aeed-213d80e0c1e7-config\") pod \"service-ca-operator-777779d784-t8qbl\" (UID: \"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.416320 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpzk9\" (UniqueName: \"kubernetes.io/projected/4651f8d9-7a8f-4740-b31b-0bf0e77cb135-kube-api-access-kpzk9\") pod \"packageserver-d55dfcdfc-89hjw\" (UID: \"4651f8d9-7a8f-4740-b31b-0bf0e77cb135\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.417564 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87be7113-65b4-48fc-9c93-a7bbb0bf9136-config-volume\") pod \"dns-default-dh52p\" (UID: \"87be7113-65b4-48fc-9c93-a7bbb0bf9136\") " pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.417846 
5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-config-volume\") pod \"collect-profiles-29557230-z7qq7\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.417926 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-mountpoint-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.417978 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-csi-data-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.418997 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea188f71-10c4-410b-bcb1-766aa053182d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f45bw\" (UID: \"ea188f71-10c4-410b-bcb1-766aa053182d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.419612 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5db2bce8-6a97-4593-9780-39b314a116b2-socket-dir\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.420128 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e6222f4e-fb93-4b18-a790-7f4affeb8232-certs\") pod \"machine-config-server-rvqxj\" (UID: \"e6222f4e-fb93-4b18-a790-7f4affeb8232\") " pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.420274 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f94341bb-1e1c-4a8d-bf68-92658a9c0632-signing-cabundle\") pod \"service-ca-9c57cc56f-dg27c\" (UID: \"f94341bb-1e1c-4a8d-bf68-92658a9c0632\") " pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:07 crc kubenswrapper[5029]: E0313 20:31:07.421114 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:07.921096577 +0000 UTC m=+227.937178980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.430762 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea188f71-10c4-410b-bcb1-766aa053182d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f45bw\" (UID: \"ea188f71-10c4-410b-bcb1-766aa053182d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.432675 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52qdd\" (UniqueName: \"kubernetes.io/projected/e879012b-d78a-4309-819f-fa76fc8fdec3-kube-api-access-52qdd\") pod \"authentication-operator-69f744f599-nwl2k\" (UID: \"e879012b-d78a-4309-819f-fa76fc8fdec3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.433319 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0d54d7e-5ec4-46ce-b90e-96e976596cc3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-h2sxq\" (UID: \"a0d54d7e-5ec4-46ce-b90e-96e976596cc3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.436818 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84c4\" (UniqueName: 
\"kubernetes.io/projected/33a7299c-87fb-43cf-a916-1946c218ad78-kube-api-access-z84c4\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.443232 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.448382 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs5sw\" (UniqueName: \"kubernetes.io/projected/ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1-kube-api-access-rs5sw\") pod \"multus-admission-controller-857f4d67dd-x8hs6\" (UID: \"ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.449400 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf8749c5-afa5-48fa-a7a4-a63a7754e27f-cert\") pod \"ingress-canary-sxmb7\" (UID: \"cf8749c5-afa5-48fa-a7a4-a63a7754e27f\") " pod="openshift-ingress-canary/ingress-canary-sxmb7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.450241 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e6222f4e-fb93-4b18-a790-7f4affeb8232-node-bootstrap-token\") pod \"machine-config-server-rvqxj\" (UID: \"e6222f4e-fb93-4b18-a790-7f4affeb8232\") " pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.450279 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e1618f0-bd7b-48fb-aeed-213d80e0c1e7-serving-cert\") pod \"service-ca-operator-777779d784-t8qbl\" (UID: 
\"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.450311 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrzmb\" (UniqueName: \"kubernetes.io/projected/784d49a1-0554-4b42-aa6b-35f4ab0dcc7a-kube-api-access-nrzmb\") pod \"package-server-manager-789f6589d5-cb72p\" (UID: \"784d49a1-0554-4b42-aa6b-35f4ab0dcc7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.450649 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bdc59b31-dc24-48fe-ba01-865f51aaf2cc-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mrx25\" (UID: \"bdc59b31-dc24-48fe-ba01-865f51aaf2cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.451291 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-secret-volume\") pod \"collect-profiles-29557230-z7qq7\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.451306 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f94341bb-1e1c-4a8d-bf68-92658a9c0632-signing-key\") pod \"service-ca-9c57cc56f-dg27c\" (UID: \"f94341bb-1e1c-4a8d-bf68-92658a9c0632\") " pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.451481 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l924k\" (UniqueName: 
\"kubernetes.io/projected/45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7-kube-api-access-l924k\") pod \"router-default-5444994796-h2jnz\" (UID: \"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7\") " pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.453348 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bdc59b31-dc24-48fe-ba01-865f51aaf2cc-srv-cert\") pod \"olm-operator-6b444d44fb-mrx25\" (UID: \"bdc59b31-dc24-48fe-ba01-865f51aaf2cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.454536 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.454607 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87be7113-65b4-48fc-9c93-a7bbb0bf9136-metrics-tls\") pod \"dns-default-dh52p\" (UID: \"87be7113-65b4-48fc-9c93-a7bbb0bf9136\") " pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.458685 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ft89\" (UniqueName: \"kubernetes.io/projected/c373b1ca-aaa6-4ee3-b8c3-769d43586a03-kube-api-access-5ft89\") pod \"console-operator-58897d9998-bdvmc\" (UID: \"c373b1ca-aaa6-4ee3-b8c3-769d43586a03\") " pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.469143 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.477567 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.486288 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvmh\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-kube-api-access-9nvmh\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.488289 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.492125 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfhd5\" (UniqueName: \"kubernetes.io/projected/38ba7d36-baaf-4e14-aa8e-5236ee9500de-kube-api-access-qfhd5\") pod \"console-f9d7485db-rvlhd\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.500410 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.512461 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: E0313 20:31:07.513173 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.013135093 +0000 UTC m=+228.029217696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.516977 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgr5v\" (UniqueName: \"kubernetes.io/projected/1dab1066-bb46-406d-b993-4e6ca669447f-kube-api-access-zgr5v\") pod \"dns-operator-744455d44c-ttzqw\" (UID: \"1dab1066-bb46-406d-b993-4e6ca669447f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.536692 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/33a7299c-87fb-43cf-a916-1946c218ad78-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vhh4f\" (UID: \"33a7299c-87fb-43cf-a916-1946c218ad78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.547732 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.563061 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.571426 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.584465 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4mbg\" (UniqueName: \"kubernetes.io/projected/55243e70-3d3c-44df-ac61-d298330ff633-kube-api-access-l4mbg\") pod \"downloads-7954f5f757-x5x9w\" (UID: \"55243e70-3d3c-44df-ac61-d298330ff633\") " pod="openshift-console/downloads-7954f5f757-x5x9w" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.586084 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzfv7\" (UniqueName: \"kubernetes.io/projected/82531657-5b20-4b32-a23c-3dbe4370c657-kube-api-access-tzfv7\") pod \"etcd-operator-b45778765-v97fz\" (UID: \"82531657-5b20-4b32-a23c-3dbe4370c657\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.609276 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhgj6\" (UniqueName: \"kubernetes.io/projected/31b17eb1-07a9-4cfb-9589-e45a4ac62791-kube-api-access-nhgj6\") pod \"machine-config-operator-74547568cd-s5v2t\" (UID: 
\"31b17eb1-07a9-4cfb-9589-e45a4ac62791\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.613593 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:07 crc kubenswrapper[5029]: E0313 20:31:07.614441 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.114416938 +0000 UTC m=+228.130499341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.618556 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2r42\" (UniqueName: \"kubernetes.io/projected/c5787c5c-be3a-43cc-bf49-46573f2b31c1-kube-api-access-s2r42\") pod \"openshift-config-operator-7777fb866f-ncp4l\" (UID: \"c5787c5c-be3a-43cc-bf49-46573f2b31c1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.643440 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drtfq\" (UniqueName: 
\"kubernetes.io/projected/8a1ea22d-3be3-412d-be38-ab360aae90e5-kube-api-access-drtfq\") pod \"marketplace-operator-79b997595-zb64j\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.667261 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kp9b\" (UniqueName: \"kubernetes.io/projected/fb5e50b8-e1b8-4351-9556-d4da3816791d-kube-api-access-8kp9b\") pod \"machine-config-controller-84d6567774-8bsws\" (UID: \"fb5e50b8-e1b8-4351-9556-d4da3816791d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.690701 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhkbp\" (UniqueName: \"kubernetes.io/projected/2e1618f0-bd7b-48fb-aeed-213d80e0c1e7-kube-api-access-rhkbp\") pod \"service-ca-operator-777779d784-t8qbl\" (UID: \"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.697139 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.704597 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.712335 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.715204 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: E0313 20:31:07.715596 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.215585131 +0000 UTC m=+228.231667534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.717249 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.733732 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.745666 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea188f71-10c4-410b-bcb1-766aa053182d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f45bw\" (UID: \"ea188f71-10c4-410b-bcb1-766aa053182d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.746869 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.749441 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.750554 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhxgd\" (UniqueName: \"kubernetes.io/projected/5db2bce8-6a97-4593-9780-39b314a116b2-kube-api-access-qhxgd\") pod \"csi-hostpathplugin-rjjb9\" (UID: \"5db2bce8-6a97-4593-9780-39b314a116b2\") " pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.773480 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.778281 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" event={"ID":"e9f4273c-6ab2-48dd-af0c-f6f03b91d037","Type":"ContainerStarted","Data":"0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.779773 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.790279 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh25b\" (UniqueName: \"kubernetes.io/projected/5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd-kube-api-access-jh25b\") pod \"auto-csr-approver-29557230-trnjq\" (UID: \"5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd\") " pod="openshift-infra/auto-csr-approver-29557230-trnjq" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.791729 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6ts\" (UniqueName: \"kubernetes.io/projected/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-kube-api-access-pm6ts\") pod \"collect-profiles-29557230-z7qq7\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.794354 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.805140 5029 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sl427 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.805216 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" podUID="e9f4273c-6ab2-48dd-af0c-f6f03b91d037" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.813433 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khptw\" (UniqueName: \"kubernetes.io/projected/f94341bb-1e1c-4a8d-bf68-92658a9c0632-kube-api-access-khptw\") pod \"service-ca-9c57cc56f-dg27c\" (UID: \"f94341bb-1e1c-4a8d-bf68-92658a9c0632\") " pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.815482 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-x5x9w" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.816753 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:07 crc kubenswrapper[5029]: E0313 20:31:07.818725 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.318683365 +0000 UTC m=+228.334765908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.839054 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qb7g\" (UniqueName: \"kubernetes.io/projected/a0d54d7e-5ec4-46ce-b90e-96e976596cc3-kube-api-access-8qb7g\") pod \"control-plane-machine-set-operator-78cbb6b69f-h2sxq\" (UID: \"a0d54d7e-5ec4-46ce-b90e-96e976596cc3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.839286 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.863275 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2zx\" (UniqueName: \"kubernetes.io/projected/bdc59b31-dc24-48fe-ba01-865f51aaf2cc-kube-api-access-2h2zx\") pod \"olm-operator-6b444d44fb-mrx25\" (UID: \"bdc59b31-dc24-48fe-ba01-865f51aaf2cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.869317 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbnvq\" (UniqueName: \"kubernetes.io/projected/e6222f4e-fb93-4b18-a790-7f4affeb8232-kube-api-access-bbnvq\") pod \"machine-config-server-rvqxj\" (UID: \"e6222f4e-fb93-4b18-a790-7f4affeb8232\") " pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.875600 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjnbh\" (UniqueName: \"kubernetes.io/projected/cf8749c5-afa5-48fa-a7a4-a63a7754e27f-kube-api-access-wjnbh\") pod \"ingress-canary-sxmb7\" (UID: \"cf8749c5-afa5-48fa-a7a4-a63a7754e27f\") " pod="openshift-ingress-canary/ingress-canary-sxmb7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.891446 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" event={"ID":"544fe537-df82-45eb-932c-89a3387540e3","Type":"ContainerStarted","Data":"8d904af6e37602ea594f46253de3a86103b4f4054302005a7afc96073a2e1314"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.891492 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" 
event={"ID":"544fe537-df82-45eb-932c-89a3387540e3","Type":"ContainerStarted","Data":"c2e20560dc746e3ff3cde6788f50a139bacad1d899de88cc2f7d0e3f0f3701f5"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.893210 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.897360 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" event={"ID":"1d41501e-682f-47d2-867d-fa61bd7e4bf1","Type":"ContainerStarted","Data":"6586c94f672bb508c7e044eba85849214c670ed6696d742d316695711dea46ee"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.897486 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" event={"ID":"1d41501e-682f-47d2-867d-fa61bd7e4bf1","Type":"ContainerStarted","Data":"d0367a142104a9fc23c4a21906916d8f786e575b250fa4e5992e467af61d752d"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.903352 5029 generic.go:334] "Generic (PLEG): container finished" podID="e6046521-c7e4-4f5d-b5ad-81e436fe2d1f" containerID="ed9a6ab211b0028c034ff284601044d92ae71c8fcaf6c3c30d74d2cc3c9d1404" exitCode=0 Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.903646 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" event={"ID":"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f","Type":"ContainerDied","Data":"ed9a6ab211b0028c034ff284601044d92ae71c8fcaf6c3c30d74d2cc3c9d1404"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.903691 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" event={"ID":"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f","Type":"ContainerStarted","Data":"6f4d8bd5073de427e6a6ff3db4cdf1b93f9b3b4eeb04c69111fc9be430663683"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 
20:31:07.918019 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:07 crc kubenswrapper[5029]: E0313 20:31:07.918498 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.41848013 +0000 UTC m=+228.434562533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.920586 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.941947 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.943185 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqj4h\" (UniqueName: \"kubernetes.io/projected/87be7113-65b4-48fc-9c93-a7bbb0bf9136-kube-api-access-xqj4h\") pod \"dns-default-dh52p\" (UID: \"87be7113-65b4-48fc-9c93-a7bbb0bf9136\") " pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.945056 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" event={"ID":"68f8cfe3-1b3f-4145-9060-bc1c70762016","Type":"ContainerStarted","Data":"b1e5408904cea86f1b6f63e27da130d7e7f6963d87cfbfb3be27e92883443e60"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.945097 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" event={"ID":"68f8cfe3-1b3f-4145-9060-bc1c70762016","Type":"ContainerStarted","Data":"29e1c71b7aeebe76014a1bc8716309c0a35cb842e7afa9f1572900f24e966b1f"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.946442 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h2jnz" event={"ID":"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7","Type":"ContainerStarted","Data":"762fe95e90a0161278ef68e0ab911ecdd69f692c87d56cfae3e2b11741ecfa60"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.953953 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" event={"ID":"230fc7d2-389f-45a1-b610-a10fb92b8796","Type":"ContainerStarted","Data":"1bcf426f4fdf8eb955a5fd87a010f33020fc52c43127047bce8d645c46f1550d"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.954017 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" event={"ID":"230fc7d2-389f-45a1-b610-a10fb92b8796","Type":"ContainerStarted","Data":"a62d702c5c48faa84175a7cead553b4b360837b8bba998b4adf1465711cdf906"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.963071 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557230-trnjq" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.972768 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.984277 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.984613 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" event={"ID":"de7331b0-d805-4b94-909a-61de2cb70ce1","Type":"ContainerStarted","Data":"60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.984722 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" event={"ID":"de7331b0-d805-4b94-909a-61de2cb70ce1","Type":"ContainerStarted","Data":"9b9b09fa16a22d71465a9534a9ed7e3e73c1d00ef7cc88e42ef0b55f8f1e699a"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.985011 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.996254 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" 
event={"ID":"edba0b8e-1343-45d0-a37f-23ed39bfddab","Type":"ContainerStarted","Data":"7029d2b90216e556bd87c6afe829f62310fd579629317d46edc3cb503a5f10f3"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.996305 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" event={"ID":"edba0b8e-1343-45d0-a37f-23ed39bfddab","Type":"ContainerStarted","Data":"87b612e75a0fb0a25440b725b27ef2c3439df85db5dc59854a6d3c345881e7db"} Mar 13 20:31:07 crc kubenswrapper[5029]: I0313 20:31:07.997270 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.009362 5029 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zkjm5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.009443 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" podUID="de7331b0-d805-4b94-909a-61de2cb70ce1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.009781 5029 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cchj6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.009821 5029 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" podUID="edba0b8e-1343-45d0-a37f-23ed39bfddab" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.020755 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.020889 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.520864456 +0000 UTC m=+228.536946859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.023597 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.024562 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.524536904 +0000 UTC m=+228.540619307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.029636 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" event={"ID":"7e26e65c-4cb6-4094-b92b-9b4e0b36253b","Type":"ContainerStarted","Data":"aa18fc3e4835befe2ef6808ebc5c961b04b89978b8a2521d6c1e600858dc4103"} Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.029738 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" event={"ID":"7e26e65c-4cb6-4094-b92b-9b4e0b36253b","Type":"ContainerStarted","Data":"674336aff1c49c9b29842b07da24689f933d48a5163b53dd2036f22b57b1c906"} Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.041289 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rvqxj" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.053731 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" event={"ID":"573fa1e5-a683-4cd2-a3d6-037732c07f53","Type":"ContainerStarted","Data":"77aedd05fe2d287339039661fd575b613eedd713d00a87fa3333de2265ec77ce"} Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.053804 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" event={"ID":"573fa1e5-a683-4cd2-a3d6-037732c07f53","Type":"ContainerStarted","Data":"2d1a5bf596d0f54c5887b9cda6b33a979b199e4a8de4082f882d903acbf1ca60"} Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.064456 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.067749 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw"] Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.091646 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sxmb7" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.130818 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.133315 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.63328304 +0000 UTC m=+228.649365443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.138256 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.141558 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.641541163 +0000 UTC m=+228.657623566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.243401 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.244406 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.744365529 +0000 UTC m=+228.760448092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.248290 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.249359 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.749336273 +0000 UTC m=+228.765418676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.338179 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x8hs6"] Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.361727 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.362120 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.862098456 +0000 UTC m=+228.878180859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.370016 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p"] Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.418582 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr"] Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.425053 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rvlhd"] Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.464617 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.470212 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:08.970185345 +0000 UTC m=+228.986267748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.566574 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.566754 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.066724642 +0000 UTC m=+229.082807045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.567059 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.567616 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.067599097 +0000 UTC m=+229.083681490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.669355 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.669785 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.169765025 +0000 UTC m=+229.185847428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.670745 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.672670 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.172648753 +0000 UTC m=+229.188731166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.710673 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" podStartSLOduration=178.710649056 podStartE2EDuration="2m58.710649056s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:08.705709772 +0000 UTC m=+228.721792175" watchObservedRunningTime="2026-03-13 20:31:08.710649056 +0000 UTC m=+228.726731459" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.748732 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8wtl" podStartSLOduration=179.748706799 podStartE2EDuration="2m59.748706799s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:08.747348743 +0000 UTC m=+228.763431156" watchObservedRunningTime="2026-03-13 20:31:08.748706799 +0000 UTC m=+228.764789212" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.748902 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bdvmc"] Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.748955 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-ttzqw"] Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.748966 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nwl2k"] Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.774309 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.774590 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.274566556 +0000 UTC m=+229.290648959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.885611 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.886176 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.386158497 +0000 UTC m=+229.402240900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.908224 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" podStartSLOduration=178.908207411 podStartE2EDuration="2m58.908207411s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:08.884927475 +0000 UTC m=+228.901009898" watchObservedRunningTime="2026-03-13 20:31:08.908207411 +0000 UTC m=+228.924289804" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.933253 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-695v5" podStartSLOduration=179.933232644 podStartE2EDuration="2m59.933232644s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:08.930663695 +0000 UTC m=+228.946746098" watchObservedRunningTime="2026-03-13 20:31:08.933232644 +0000 UTC m=+228.949315047" Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.986968 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.987143 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.487114604 +0000 UTC m=+229.503197007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:08 crc kubenswrapper[5029]: I0313 20:31:08.988199 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:08 crc kubenswrapper[5029]: E0313 20:31:08.988370 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.488363118 +0000 UTC m=+229.504445521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.070542 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-djlcm" podStartSLOduration=180.070522588 podStartE2EDuration="3m0.070522588s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:09.069272325 +0000 UTC m=+229.085354758" watchObservedRunningTime="2026-03-13 20:31:09.070522588 +0000 UTC m=+229.086604991" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.086752 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rvlhd" event={"ID":"38ba7d36-baaf-4e14-aa8e-5236ee9500de","Type":"ContainerStarted","Data":"e24525bc8ca3304d6e55337e1af0ff2f8d2b7b55fd479a58d1e572b16ce9caa6"} Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.089447 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.090037 5029 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.590020763 +0000 UTC m=+229.606103156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.114689 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" event={"ID":"784d49a1-0554-4b42-aa6b-35f4ab0dcc7a","Type":"ContainerStarted","Data":"4036305d95e4059b6fec614a39b676e9c76a086b4d0bd3db8a68d3ef056dfedc"} Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.121428 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" podStartSLOduration=180.121411238 podStartE2EDuration="3m0.121411238s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:09.120722659 +0000 UTC m=+229.136805082" watchObservedRunningTime="2026-03-13 20:31:09.121411238 +0000 UTC m=+229.137493641" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.121704 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" 
event={"ID":"4651f8d9-7a8f-4740-b31b-0bf0e77cb135","Type":"ContainerStarted","Data":"35e31defacdfd14616ad885ba753f1add1822027314e7d152e47f6eba1b42e70"} Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.144204 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mmwnc" event={"ID":"7e26e65c-4cb6-4094-b92b-9b4e0b36253b","Type":"ContainerStarted","Data":"e33edf9a220c9685d54211f2124c5a20fd8df50a28afea164a31ee4f473c1343"} Mar 13 20:31:09 crc kubenswrapper[5029]: W0313 20:31:09.152455 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dab1066_bb46_406d_b993_4e6ca669447f.slice/crio-551e6484566da18850e1a65b5bfdf0f105c087d14a77af90bc41a73175dcdacc WatchSource:0}: Error finding container 551e6484566da18850e1a65b5bfdf0f105c087d14a77af90bc41a73175dcdacc: Status 404 returned error can't find the container with id 551e6484566da18850e1a65b5bfdf0f105c087d14a77af90bc41a73175dcdacc Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.157206 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" podStartSLOduration=179.157190241 podStartE2EDuration="2m59.157190241s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:09.152879865 +0000 UTC m=+229.168962258" watchObservedRunningTime="2026-03-13 20:31:09.157190241 +0000 UTC m=+229.173272644" Mar 13 20:31:09 crc kubenswrapper[5029]: W0313 20:31:09.166010 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode879012b_d78a_4309_819f_fa76fc8fdec3.slice/crio-f5a5cde7f9b44d5cac9efa5fea79d0a188580890779ec8651d81b76cdddb2416 WatchSource:0}: Error finding container 
f5a5cde7f9b44d5cac9efa5fea79d0a188580890779ec8651d81b76cdddb2416: Status 404 returned error can't find the container with id f5a5cde7f9b44d5cac9efa5fea79d0a188580890779ec8651d81b76cdddb2416 Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.173719 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bdvmc" event={"ID":"c373b1ca-aaa6-4ee3-b8c3-769d43586a03","Type":"ContainerStarted","Data":"753879843ac9b400ca065ef66cd3fdf83e97aee837637dd1b689540881e8caaf"} Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.192628 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.193223 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.69320805 +0000 UTC m=+229.709290453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.221236 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h2jnz" event={"ID":"45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7","Type":"ContainerStarted","Data":"d1d010ca6097fe3c9264aacdeb5a332126004b63e9f3ad222e1db77ca37239d0"} Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.255548 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rvqxj" event={"ID":"e6222f4e-fb93-4b18-a790-7f4affeb8232","Type":"ContainerStarted","Data":"ef6d9dc49029cb7f505dfc6d61a6b5e3f8547a0c4e24d05caa8b376009004853"} Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.258098 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" event={"ID":"1d41501e-682f-47d2-867d-fa61bd7e4bf1","Type":"ContainerStarted","Data":"42867469addd72f6b361bb2e1d731b265d9ed22e31726373e05793719225a162"} Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.284763 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqdpc" podStartSLOduration=180.284747423 podStartE2EDuration="3m0.284747423s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:09.284119986 +0000 UTC m=+229.300202389" 
watchObservedRunningTime="2026-03-13 20:31:09.284747423 +0000 UTC m=+229.300829816" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.294085 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.294464 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:09.794440814 +0000 UTC m=+229.810523217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.289211 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" event={"ID":"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c","Type":"ContainerStarted","Data":"feea2f3377fce130a86a9f93217dc2db269ea49d92110d79bfa7ee7e27f6ab0e"} Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.297936 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" 
event={"ID":"ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1","Type":"ContainerStarted","Data":"6b45517371af114932de6a3b9a4cfd05b8ca4fbae4e6c84a526c4b8258426771"} Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.319357 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.324625 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jwhw8" podStartSLOduration=179.324609966 podStartE2EDuration="2m59.324609966s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:09.32293284 +0000 UTC m=+229.339015253" watchObservedRunningTime="2026-03-13 20:31:09.324609966 +0000 UTC m=+229.340692359" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.328638 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cchj6" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.398826 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.401349 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:09.9013342 +0000 UTC m=+229.917416593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.432542 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.479167 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.480198 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.480377 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.526555 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.527389 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.027363801 +0000 UTC m=+230.043446204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.531035 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.531595 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.031577305 +0000 UTC m=+230.047659708 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.604423 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" podStartSLOduration=179.604383083 podStartE2EDuration="2m59.604383083s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:09.601503206 +0000 UTC m=+229.617585609" watchObservedRunningTime="2026-03-13 20:31:09.604383083 +0000 UTC m=+229.620465486" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.625822 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8"] Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.634274 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.634583 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" 
(UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.634650 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.634712 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.634747 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.636100 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.136059675 +0000 UTC m=+230.152142078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.646408 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.661931 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.662305 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zb64j"] Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.676334 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.677987 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.730921 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" podStartSLOduration=180.730893887 podStartE2EDuration="3m0.730893887s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:09.715367679 +0000 UTC m=+229.731450082" watchObservedRunningTime="2026-03-13 20:31:09.730893887 +0000 UTC m=+229.746976310" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.739605 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.739718 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.740639 5029 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.240593518 +0000 UTC m=+230.256676101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.750624 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a301620b-657c-46c0-a1a4-f7774e38f273-metrics-certs\") pod \"network-metrics-daemon-frlln\" (UID: \"a301620b-657c-46c0-a1a4-f7774e38f273\") " pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.773077 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58644: no serving certificate available for the kubelet" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.841267 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.841831 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:10.341813192 +0000 UTC m=+230.357895595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.886611 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58654: no serving certificate available for the kubelet" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.925191 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-frlln" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.925236 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.932838 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.942061 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.948108 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:09 crc kubenswrapper[5029]: E0313 20:31:09.948503 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.448487371 +0000 UTC m=+230.464569774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:09 crc kubenswrapper[5029]: I0313 20:31:09.982556 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58658: no serving certificate available for the kubelet" Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.051865 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 
20:31:10.052274 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.552251133 +0000 UTC m=+230.568333536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.085627 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.085681 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.086095 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t2twj" podStartSLOduration=180.086063283 podStartE2EDuration="3m0.086063283s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:10.074772629 +0000 UTC m=+230.090855042" watchObservedRunningTime="2026-03-13 20:31:10.086063283 +0000 UTC m=+230.102145686" Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.097843 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58666: no serving certificate available for the kubelet" Mar 13 20:31:10 crc 
kubenswrapper[5029]: I0313 20:31:10.106792 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn" Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.153724 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.154013 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.6540017 +0000 UTC m=+230.670084103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.179490 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58668: no serving certificate available for the kubelet" Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.236633 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h2jnz" podStartSLOduration=180.236597983 podStartE2EDuration="3m0.236597983s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:10.164828902 +0000 UTC m=+230.180911325" watchObservedRunningTime="2026-03-13 20:31:10.236597983 +0000 UTC m=+230.252680406" Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.270795 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.272331 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.772302373 +0000 UTC m=+230.788384776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.290616 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58678: no serving certificate available for the kubelet"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.365382 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" event={"ID":"1dab1066-bb46-406d-b993-4e6ca669447f","Type":"ContainerStarted","Data":"21cdaae2fc31e315a3a4822c62b160644d50a6f528680f33b871b90021719d1f"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.365461 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" event={"ID":"1dab1066-bb46-406d-b993-4e6ca669447f","Type":"ContainerStarted","Data":"551e6484566da18850e1a65b5bfdf0f105c087d14a77af90bc41a73175dcdacc"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.373138 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.373512 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.873495356 +0000 UTC m=+230.889577759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.400808 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58692: no serving certificate available for the kubelet"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.405297 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" event={"ID":"8a1ea22d-3be3-412d-be38-ab360aae90e5","Type":"ContainerStarted","Data":"1b859d691e2541cead23e858d082af7ff17f780bcfb0300cf7f19db2adff416a"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.432402 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" event={"ID":"2dc9ea66-cfec-47f9-a106-f8ad7c0a162c","Type":"ContainerStarted","Data":"2ed949706e1ac8bb5ef819f1bde17000d41f080d9999b32a9d2985395e2b70e4"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.463627 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" event={"ID":"4651f8d9-7a8f-4740-b31b-0bf0e77cb135","Type":"ContainerStarted","Data":"e2b6770f2f11af45079fb77c8efce26f4e6137738fd53dd7a072c1af6bfe3fb6"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.465198 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.476840 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.484012 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:10.98397181 +0000 UTC m=+231.000054213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.493365 5029 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-89hjw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body=
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.493448 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" podUID="4651f8d9-7a8f-4740-b31b-0bf0e77cb135" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.500463 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 20:31:10 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld
Mar 13 20:31:10 crc kubenswrapper[5029]: [+]process-running ok
Mar 13 20:31:10 crc kubenswrapper[5029]: healthz check failed
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.500560 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.513451 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gb8zr" podStartSLOduration=180.513423221 podStartE2EDuration="3m0.513423221s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:10.476428986 +0000 UTC m=+230.492511399" watchObservedRunningTime="2026-03-13 20:31:10.513423221 +0000 UTC m=+230.529505624"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.520324 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" podStartSLOduration=180.515619391 podStartE2EDuration="3m0.515619391s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:10.513564405 +0000 UTC m=+230.529646808" watchObservedRunningTime="2026-03-13 20:31:10.515619391 +0000 UTC m=+230.531701804"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.529017 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" event={"ID":"ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1","Type":"ContainerStarted","Data":"6147625627d2ad38a560f3f3245f9d00e71e5730d2ad4c34ad0dfa16fac908ef"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.547342 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" event={"ID":"e879012b-d78a-4309-819f-fa76fc8fdec3","Type":"ContainerStarted","Data":"d83905dbbeb306563ea3c5399ced591b838ed2b2d9cb44965c230183c5419ceb"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.547389 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" event={"ID":"e879012b-d78a-4309-819f-fa76fc8fdec3","Type":"ContainerStarted","Data":"f5a5cde7f9b44d5cac9efa5fea79d0a188580890779ec8651d81b76cdddb2416"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.558076 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v97fz"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.560766 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qz4wv"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.560919 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58694: no serving certificate available for the kubelet"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.586457 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.610745 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.613933 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.113888525 +0000 UTC m=+231.129970928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.616648 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-nwl2k" podStartSLOduration=181.616626098 podStartE2EDuration="3m1.616626098s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:10.585006528 +0000 UTC m=+230.601088951" watchObservedRunningTime="2026-03-13 20:31:10.616626098 +0000 UTC m=+230.632708501"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.621490 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.631423 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rvqxj" podStartSLOduration=6.631385305 podStartE2EDuration="6.631385305s" podCreationTimestamp="2026-03-13 20:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:10.625639001 +0000 UTC m=+230.641721414" watchObservedRunningTime="2026-03-13 20:31:10.631385305 +0000 UTC m=+230.647467698"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.692321 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.693347 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.193321933 +0000 UTC m=+231.209404336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.694008 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66klh\" (UniqueName: \"kubernetes.io/projected/e2f9d5d5-9771-4294-961f-110aa2430e29-kube-api-access-66klh\") pod \"certified-operators-qz4wv\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.694072 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-utilities\") pod \"certified-operators-qz4wv\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.694090 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-catalog-content\") pod \"certified-operators-qz4wv\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.694174 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.696929 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.196909519 +0000 UTC m=+231.212991922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.777775 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rvqxj" event={"ID":"e6222f4e-fb93-4b18-a790-7f4affeb8232","Type":"ContainerStarted","Data":"b49efcefb290f7085e76c8dc87bb16c8d6e89dcb77a647978758690ffee27a5e"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.777822 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" event={"ID":"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f","Type":"ContainerStarted","Data":"73b18d1e0e682663e65d7f590f234f9b9dd3d9db01472f0b9cc57476f167e0c7"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.777879 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rvlhd" event={"ID":"38ba7d36-baaf-4e14-aa8e-5236ee9500de","Type":"ContainerStarted","Data":"0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.777900 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz4wv"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.777929 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.777944 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.777957 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kl2lj"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.779210 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rjjb9"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.779427 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.782547 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 13 20:31:10 crc kubenswrapper[5029]: W0313 20:31:10.784511 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5db2bce8_6a97_4593_9780_39b314a116b2.slice/crio-60c145ff53375c1457dd43aa33ddad3438cab0356334db81125828e67ffb9772 WatchSource:0}: Error finding container 60c145ff53375c1457dd43aa33ddad3438cab0356334db81125828e67ffb9772: Status 404 returned error can't find the container with id 60c145ff53375c1457dd43aa33ddad3438cab0356334db81125828e67ffb9772
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.786917 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" event={"ID":"784d49a1-0554-4b42-aa6b-35f4ab0dcc7a","Type":"ContainerStarted","Data":"92a994b37e6941d81d727215909c4bfc5a044adc416dbafc73a53c165965cac7"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.787602 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.794650 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.794883 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.294827274 +0000 UTC m=+231.310909677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.794926 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-utilities\") pod \"certified-operators-qz4wv\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.794990 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-catalog-content\") pod \"certified-operators-qz4wv\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.795137 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.795247 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66klh\" (UniqueName: \"kubernetes.io/projected/e2f9d5d5-9771-4294-961f-110aa2430e29-kube-api-access-66klh\") pod \"certified-operators-qz4wv\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.797500 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.297484995 +0000 UTC m=+231.313567398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.798119 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-utilities\") pod \"certified-operators-qz4wv\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.798554 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bdvmc" event={"ID":"c373b1ca-aaa6-4ee3-b8c3-769d43586a03","Type":"ContainerStarted","Data":"40fd92150a54ac6bd11af8f386961519531add3e10419b7003cf29741bfce1cc"}
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.799315 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-catalog-content\") pod \"certified-operators-qz4wv\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.802572 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kl2lj"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.802630 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bdvmc"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.812747 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.857879 5029 patch_prober.go:28] interesting pod/console-operator-58897d9998-bdvmc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.857983 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bdvmc" podUID="c373b1ca-aaa6-4ee3-b8c3-769d43586a03" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.859040 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.859131 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-x5x9w"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.873179 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" event={"ID":"c6fda68b-609a-4564-9fd8-ccfd526fa9de","Type":"ContainerStarted","Data":"22610d47f11da825dc0b5f68a1a724d2b1ed6f8092ad32ece244a800d729a5d5"}
Mar 13 20:31:10 crc kubenswrapper[5029]: W0313 20:31:10.873596 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33a7299c_87fb_43cf_a916_1946c218ad78.slice/crio-2bb0fc9fb83fd7c01b5c0645525109d272e57134fceea5fe24727690397e0a04 WatchSource:0}: Error finding container 2bb0fc9fb83fd7c01b5c0645525109d272e57134fceea5fe24727690397e0a04: Status 404 returned error can't find the container with id 2bb0fc9fb83fd7c01b5c0645525109d272e57134fceea5fe24727690397e0a04
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.889258 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhrxn"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.891861 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66klh\" (UniqueName: \"kubernetes.io/projected/e2f9d5d5-9771-4294-961f-110aa2430e29-kube-api-access-66klh\") pod \"certified-operators-qz4wv\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.897707 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.905154 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.405108041 +0000 UTC m=+231.421190444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.906413 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.906814 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-catalog-content\") pod \"community-operators-kl2lj\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.906947 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-utilities\") pod \"community-operators-kl2lj\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.906978 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk6pk\" (UniqueName: \"kubernetes.io/projected/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-kube-api-access-qk6pk\") pod \"community-operators-kl2lj\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:10 crc kubenswrapper[5029]: E0313 20:31:10.915306 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.415287345 +0000 UTC m=+231.431369748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.957134 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.967153 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5jkhw"]
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.970293 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jkhw"
Mar 13 20:31:10 crc kubenswrapper[5029]: I0313 20:31:10.976200 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jkhw"]
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.008687 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.008954 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgj84\" (UniqueName: \"kubernetes.io/projected/9d4a1347-08c4-42b0-9fb6-268fdc83147f-kube-api-access-bgj84\") pod \"certified-operators-5jkhw\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") " pod="openshift-marketplace/certified-operators-5jkhw"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.009065 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-catalog-content\") pod \"community-operators-kl2lj\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.009164 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.50914853 +0000 UTC m=+231.525230933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.009518 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-catalog-content\") pod \"community-operators-kl2lj\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.009717 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-utilities\") pod \"community-operators-kl2lj\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.009792 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk6pk\" (UniqueName: \"kubernetes.io/projected/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-kube-api-access-qk6pk\") pod \"community-operators-kl2lj\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.009965 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.010042 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-utilities\") pod \"certified-operators-5jkhw\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") " pod="openshift-marketplace/certified-operators-5jkhw"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.010145 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-catalog-content\") pod \"certified-operators-5jkhw\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") " pod="openshift-marketplace/certified-operators-5jkhw"
Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.011829 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.511816582 +0000 UTC m=+231.527898985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.012328 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-utilities\") pod \"community-operators-kl2lj\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.013831 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46"]
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.016321 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" podStartSLOduration=181.016167829 podStartE2EDuration="3m1.016167829s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:11.005623725 +0000 UTC m=+231.021706128" watchObservedRunningTime="2026-03-13 20:31:11.016167829 +0000 UTC m=+231.032250232"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.042381 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rvlhd" podStartSLOduration=182.042357674 podStartE2EDuration="3m2.042357674s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:11.036799794 +0000 UTC m=+231.052882197" watchObservedRunningTime="2026-03-13 20:31:11.042357674 +0000 UTC m=+231.058440077"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.052616 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk6pk\" (UniqueName: \"kubernetes.io/projected/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-kube-api-access-qk6pk\") pod \"community-operators-kl2lj\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " pod="openshift-marketplace/community-operators-kl2lj"
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.053046 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7"]
Mar 13 20:31:11 crc kubenswrapper[5029]: W0313 20:31:11.101492 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod067c1734_d7ab_4e50_b020_1b65f0350169.slice/crio-d8cb56af1807806bfb7f2a6bd8bc696395c8304ab1bcc36d53f1e388426b10ce WatchSource:0}: Error finding container d8cb56af1807806bfb7f2a6bd8bc696395c8304ab1bcc36d53f1e388426b10ce: Status 404 returned error can't find the container with id d8cb56af1807806bfb7f2a6bd8bc696395c8304ab1bcc36d53f1e388426b10ce
Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.103118 5029 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-kl2lj" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.116922 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.117309 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-utilities\") pod \"certified-operators-5jkhw\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") " pod="openshift-marketplace/certified-operators-5jkhw" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.117336 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-catalog-content\") pod \"certified-operators-5jkhw\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") " pod="openshift-marketplace/certified-operators-5jkhw" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.117881 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgj84\" (UniqueName: \"kubernetes.io/projected/9d4a1347-08c4-42b0-9fb6-268fdc83147f-kube-api-access-bgj84\") pod \"certified-operators-5jkhw\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") " pod="openshift-marketplace/certified-operators-5jkhw" Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.118277 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:11.618254776 +0000 UTC m=+231.634337179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.119145 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-catalog-content\") pod \"certified-operators-5jkhw\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") " pod="openshift-marketplace/certified-operators-5jkhw" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.119302 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-utilities\") pod \"certified-operators-5jkhw\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") " pod="openshift-marketplace/certified-operators-5jkhw" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.133771 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bdvmc" podStartSLOduration=182.133753983 podStartE2EDuration="3m2.133753983s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:11.115697517 +0000 UTC m=+231.131779920" watchObservedRunningTime="2026-03-13 20:31:11.133753983 +0000 UTC m=+231.149836386" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.147566 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgj84\" (UniqueName: \"kubernetes.io/projected/9d4a1347-08c4-42b0-9fb6-268fdc83147f-kube-api-access-bgj84\") pod \"certified-operators-5jkhw\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") " pod="openshift-marketplace/certified-operators-5jkhw" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.151035 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-494x8"] Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.160580 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-494x8"] Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.162744 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.219557 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.219614 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-catalog-content\") pod \"community-operators-494x8\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.219654 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-utilities\") pod \"community-operators-494x8\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.219694 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnhl4\" (UniqueName: \"kubernetes.io/projected/3c8fadb2-962e-4bca-8305-a51b8d2334bb-kube-api-access-tnhl4\") pod \"community-operators-494x8\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.220058 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.720042955 +0000 UTC m=+231.736125358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.223028 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws"] Mar 13 20:31:11 crc kubenswrapper[5029]: W0313 20:31:11.224554 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8143251f_c7f9_42a8_a7ad_dfd9d5f87a05.slice/crio-393948d6eea55e97a1be91dbcc6d95607134460f62fe72255b8b267c432ebad8 WatchSource:0}: Error finding container 393948d6eea55e97a1be91dbcc6d95607134460f62fe72255b8b267c432ebad8: Status 404 returned error can't find the container with id 393948d6eea55e97a1be91dbcc6d95607134460f62fe72255b8b267c432ebad8 Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.247569 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58704: no serving certificate available for the kubelet" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.261641 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t"] Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.284025 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq"] Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.311643 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sxmb7"] Mar 13 20:31:11 crc kubenswrapper[5029]: W0313 
20:31:11.319605 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb5e50b8_e1b8_4351_9556_d4da3816791d.slice/crio-31077d3cab981da08dfb39dba572eed1c0327eade97442e6efedf739b43ac3b5 WatchSource:0}: Error finding container 31077d3cab981da08dfb39dba572eed1c0327eade97442e6efedf739b43ac3b5: Status 404 returned error can't find the container with id 31077d3cab981da08dfb39dba572eed1c0327eade97442e6efedf739b43ac3b5 Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.320677 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.320917 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-catalog-content\") pod \"community-operators-494x8\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.320957 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-utilities\") pod \"community-operators-494x8\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.321003 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:11.820987381 +0000 UTC m=+231.837069784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.321047 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnhl4\" (UniqueName: \"kubernetes.io/projected/3c8fadb2-962e-4bca-8305-a51b8d2334bb-kube-api-access-tnhl4\") pod \"community-operators-494x8\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.321312 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-catalog-content\") pod \"community-operators-494x8\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.321602 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-utilities\") pod \"community-operators-494x8\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.334673 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25"] Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.340658 5029 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-frlln"] Mar 13 20:31:11 crc kubenswrapper[5029]: W0313 20:31:11.359132 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31b17eb1_07a9_4cfb_9589_e45a4ac62791.slice/crio-a513b187261140dfd66766aed15ef5067ce1f516a480a169e01a86a41a6a1f69 WatchSource:0}: Error finding container a513b187261140dfd66766aed15ef5067ce1f516a480a169e01a86a41a6a1f69: Status 404 returned error can't find the container with id a513b187261140dfd66766aed15ef5067ce1f516a480a169e01a86a41a6a1f69 Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.365809 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dg27c"] Mar 13 20:31:11 crc kubenswrapper[5029]: W0313 20:31:11.369814 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf8749c5_afa5_48fa_a7a4_a63a7754e27f.slice/crio-6716776cd324eb96385061896a5c62e5db42c88aad12b3a7a09426268d532066 WatchSource:0}: Error finding container 6716776cd324eb96385061896a5c62e5db42c88aad12b3a7a09426268d532066: Status 404 returned error can't find the container with id 6716776cd324eb96385061896a5c62e5db42c88aad12b3a7a09426268d532066 Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.371491 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557230-trnjq"] Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.373978 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dh52p"] Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.374128 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnhl4\" (UniqueName: \"kubernetes.io/projected/3c8fadb2-962e-4bca-8305-a51b8d2334bb-kube-api-access-tnhl4\") pod \"community-operators-494x8\" (UID: 
\"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.405784 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jkhw" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.421506 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.422082 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:11.922070371 +0000 UTC m=+231.938152764 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: W0313 20:31:11.472648 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddd8ae7_2043_4d10_bd7f_f94801bbb3cd.slice/crio-1d482eb449acacdfed0b93c85904f3981ff912b09dadb970f18960f796049b8b WatchSource:0}: Error finding container 1d482eb449acacdfed0b93c85904f3981ff912b09dadb970f18960f796049b8b: Status 404 returned error can't find the container with id 1d482eb449acacdfed0b93c85904f3981ff912b09dadb970f18960f796049b8b Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.484995 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.485145 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:11 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:11 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:11 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.485205 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 
20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.505099 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-494x8" Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.523715 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.524488 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.024467186 +0000 UTC m=+232.040549589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.630107 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.630841 5029 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.130822578 +0000 UTC m=+232.146904981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.735710 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.735967 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.235911956 +0000 UTC m=+232.251994369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.736266 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.736614 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.236593324 +0000 UTC m=+232.252675727 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.743699 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz4wv"] Mar 13 20:31:11 crc kubenswrapper[5029]: W0313 20:31:11.813230 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2f9d5d5_9771_4294_961f_110aa2430e29.slice/crio-00a5f80e1538f209406fd87a2ae3d8a5e1bae524bf542d10345838fe04b5283b WatchSource:0}: Error finding container 00a5f80e1538f209406fd87a2ae3d8a5e1bae524bf542d10345838fe04b5283b: Status 404 returned error can't find the container with id 00a5f80e1538f209406fd87a2ae3d8a5e1bae524bf542d10345838fe04b5283b Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.837077 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.837230 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.337206531 +0000 UTC m=+232.353288934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.837531 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.838066 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.338046184 +0000 UTC m=+232.354128587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.917788 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"18732170ab063bfce4d160084303ec43fb66c1b4208229bbc2fe55c95219e342"} Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.933174 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" event={"ID":"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05","Type":"ContainerStarted","Data":"393948d6eea55e97a1be91dbcc6d95607134460f62fe72255b8b267c432ebad8"} Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.940877 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:11 crc kubenswrapper[5029]: E0313 20:31:11.941400 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.441371994 +0000 UTC m=+232.457454397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.986837 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" event={"ID":"33a7299c-87fb-43cf-a916-1946c218ad78","Type":"ContainerStarted","Data":"a48d0061b5bf28c65be9407124724bacccfb1f85fe070cb7359ba393da1d708e"} Mar 13 20:31:11 crc kubenswrapper[5029]: I0313 20:31:11.986902 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" event={"ID":"33a7299c-87fb-43cf-a916-1946c218ad78","Type":"ContainerStarted","Data":"2bb0fc9fb83fd7c01b5c0645525109d272e57134fceea5fe24727690397e0a04"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:11.996300 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" event={"ID":"ffa8ff7a-7787-4ad4-a176-7ae0c7c5b9f1","Type":"ContainerStarted","Data":"fb6814c0bff84b2f6a0bd5073ce4b8355aec2ce867efd9adae8e85e69559c48b"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.046745 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.048763 
5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.548751183 +0000 UTC m=+232.564833586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.052126 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0ec8540fe473a6a77bcd56925555cc83c0fc5e4d8ef171b10bb9951b46869441"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.074194 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kl2lj"] Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.086191 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" event={"ID":"31b17eb1-07a9-4cfb-9589-e45a4ac62791","Type":"ContainerStarted","Data":"a513b187261140dfd66766aed15ef5067ce1f516a480a169e01a86a41a6a1f69"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.100574 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhh4f" podStartSLOduration=183.100537747 podStartE2EDuration="3m3.100537747s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:12.068833694 +0000 UTC m=+232.084916097" watchObservedRunningTime="2026-03-13 20:31:12.100537747 +0000 UTC m=+232.116620170" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.116558 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-x8hs6" podStartSLOduration=182.116527497 podStartE2EDuration="3m2.116527497s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:12.111606315 +0000 UTC m=+232.127688728" watchObservedRunningTime="2026-03-13 20:31:12.116527497 +0000 UTC m=+232.132609900" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.117176 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" event={"ID":"5db2bce8-6a97-4593-9780-39b314a116b2","Type":"ContainerStarted","Data":"60c145ff53375c1457dd43aa33ddad3438cab0356334db81125828e67ffb9772"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.149759 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" event={"ID":"ea188f71-10c4-410b-bcb1-766aa053182d","Type":"ContainerStarted","Data":"acf4fad14f51abefa9828a602aa3bc2daedd6a51eb642b08d952e009b296fc8f"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.157962 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.158802 5029 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.658775314 +0000 UTC m=+232.674857717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.189631 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"09573831c8eaf2140eb99922f7d8e78ff5fed074c5acedd8fede62363cc50483"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.200635 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" event={"ID":"bdc59b31-dc24-48fe-ba01-865f51aaf2cc","Type":"ContainerStarted","Data":"eba4b14369b53cf1d89a38b1cdc18a5cca9e36c6630de7b1a7943b5184582a0e"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.208560 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" event={"ID":"fb5e50b8-e1b8-4351-9556-d4da3816791d","Type":"ContainerStarted","Data":"31077d3cab981da08dfb39dba572eed1c0327eade97442e6efedf739b43ac3b5"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.209561 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" 
event={"ID":"f94341bb-1e1c-4a8d-bf68-92658a9c0632","Type":"ContainerStarted","Data":"d518dcb692b9fa5a5c135b580961f0d7027ff51a47d82ce09e78b2c86dca38b3"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.260718 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.261559 5029 generic.go:334] "Generic (PLEG): container finished" podID="c5787c5c-be3a-43cc-bf49-46573f2b31c1" containerID="63b67ec85c355fcfe036a6a91f824b2b32dd222db7b85d8bd02e50e8f26f4717" exitCode=0 Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.261677 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" event={"ID":"c5787c5c-be3a-43cc-bf49-46573f2b31c1","Type":"ContainerDied","Data":"63b67ec85c355fcfe036a6a91f824b2b32dd222db7b85d8bd02e50e8f26f4717"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.261706 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" event={"ID":"c5787c5c-be3a-43cc-bf49-46573f2b31c1","Type":"ContainerStarted","Data":"b7f639190480143f7308f5d254e109a69cb6c6df5fe1f5cf0155727df7d33ff1"} Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.280413 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.780385356 +0000 UTC m=+232.796467759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.297233 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" event={"ID":"1dab1066-bb46-406d-b993-4e6ca669447f","Type":"ContainerStarted","Data":"e5fb9b1ebe140aa6ceffd9a96f725f52ea56e3339eaf4b2fd8a91072d145abf9"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.306088 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dh52p" event={"ID":"87be7113-65b4-48fc-9c93-a7bbb0bf9136","Type":"ContainerStarted","Data":"34a48598d050ba68d6a5d37b12f69a300afe360b04213be905d9036d6f29029e"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.328668 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" event={"ID":"8a1ea22d-3be3-412d-be38-ab360aae90e5","Type":"ContainerStarted","Data":"ad8dab26cfeba72cdee42cdffe8732ceb3ebfa648dc0198963af9da8b5eff610"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.329631 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.338008 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9h8rj"] Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.346362 5029 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" podUID="d3a5bbe6-2908-4756-9e53-58240ec41df8" containerName="controller-manager" containerID="cri-o://a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54" gracePeriod=30 Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.351416 5029 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zb64j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/healthz\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.352694 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" podUID="8a1ea22d-3be3-412d-be38-ab360aae90e5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.14:8080/healthz\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.361693 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.362001 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.861972971 +0000 UTC m=+232.878055374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.368635 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.370896 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.87088145 +0000 UTC m=+232.886963853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.375836 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.377219 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sxmb7" event={"ID":"cf8749c5-afa5-48fa-a7a4-a63a7754e27f","Type":"ContainerStarted","Data":"6716776cd324eb96385061896a5c62e5db42c88aad12b3a7a09426268d532066"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.395343 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ttzqw" podStartSLOduration=183.395316508 podStartE2EDuration="3m3.395316508s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:12.353800041 +0000 UTC m=+232.369882444" watchObservedRunningTime="2026-03-13 20:31:12.395316508 +0000 UTC m=+232.411398911" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.399893 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jkhw"] Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.399937 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"] Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 
20:31:12.400167 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" event={"ID":"c6fda68b-609a-4564-9fd8-ccfd526fa9de","Type":"ContainerStarted","Data":"efd3dcfc143c31453325a2f83803f8d7a017af2237073eb141a0f294b53657fb"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.420468 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" podStartSLOduration=182.420444834 podStartE2EDuration="3m2.420444834s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:12.405594595 +0000 UTC m=+232.421677008" watchObservedRunningTime="2026-03-13 20:31:12.420444834 +0000 UTC m=+232.436527237" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.436761 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" event={"ID":"82531657-5b20-4b32-a23c-3dbe4370c657","Type":"ContainerStarted","Data":"680f9d6111ca459ba8b44c5919526ae6452273ffb99adfd5bff8eb6f628a94d4"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.436831 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" event={"ID":"82531657-5b20-4b32-a23c-3dbe4370c657","Type":"ContainerStarted","Data":"10b589b3271cbacf5167de8ae7486690fc10d6da86b8c7c0bcd057dd4c5dec7c"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.447281 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7vv8" podStartSLOduration=182.447243636 podStartE2EDuration="3m2.447243636s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:12.433708311 +0000 UTC m=+232.449790724" watchObservedRunningTime="2026-03-13 20:31:12.447243636 +0000 UTC m=+232.463326039" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.456644 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-frlln" event={"ID":"a301620b-657c-46c0-a1a4-f7774e38f273","Type":"ContainerStarted","Data":"eeeb339a48b55e3d67309e67023dcbabd8c4a61250fbe0770c607b2b9108d0fb"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.469419 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x5x9w" event={"ID":"55243e70-3d3c-44df-ac61-d298330ff633","Type":"ContainerStarted","Data":"9f19a0c50c9f434d296064572b9516116c8eab810fdbef2e67bdd78798facb04"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.469462 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x5x9w" event={"ID":"55243e70-3d3c-44df-ac61-d298330ff633","Type":"ContainerStarted","Data":"d0461c9f1a626dc9774a7aaa14f5665f277919d81f2869cf1ef49482203ec384"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.469640 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.470621 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-x5x9w" Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.470874 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.97082916 +0000 UTC m=+232.986911563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.475701 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.487669 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:12.987623272 +0000 UTC m=+233.003705675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.503356 5029 patch_prober.go:28] interesting pod/downloads-7954f5f757-x5x9w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.503413 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x5x9w" podUID="55243e70-3d3c-44df-ac61-d298330ff633" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.505298 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557230-trnjq" event={"ID":"5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd","Type":"ContainerStarted","Data":"1d482eb449acacdfed0b93c85904f3981ff912b09dadb970f18960f796049b8b"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.505507 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:12 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:12 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:12 crc kubenswrapper[5029]: healthz check failed 
Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.505528 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.513966 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" event={"ID":"a0d54d7e-5ec4-46ce-b90e-96e976596cc3","Type":"ContainerStarted","Data":"10cd937b209b5bcfe24c21a63ff3edde4801916c780732007f466c317864610c"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.522796 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-v97fz" podStartSLOduration=182.522783348 podStartE2EDuration="3m2.522783348s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:12.520462566 +0000 UTC m=+232.536544989" watchObservedRunningTime="2026-03-13 20:31:12.522783348 +0000 UTC m=+232.538865751" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.534409 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" event={"ID":"784d49a1-0554-4b42-aa6b-35f4ab0dcc7a","Type":"ContainerStarted","Data":"d0602d939efd32a76dd0abf5a1c8134a3b35b21cb9a29d50e7b78e1d26567f25"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.553676 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" event={"ID":"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7","Type":"ContainerStarted","Data":"f32f39b1c4adea4d76cd7239be1d5ccdfd2eab1bff99401f43ba4bc303472329"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 
20:31:12.553721 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" event={"ID":"2e1618f0-bd7b-48fb-aeed-213d80e0c1e7","Type":"ContainerStarted","Data":"dfd2bfdaf4d9ca12294c59b692da2e42819a7daa432d40ac03023b4aa73c04f8"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.557984 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-x5x9w" podStartSLOduration=183.557960294 podStartE2EDuration="3m3.557960294s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:12.550661399 +0000 UTC m=+232.566743802" watchObservedRunningTime="2026-03-13 20:31:12.557960294 +0000 UTC m=+232.574042697" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.592143 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.607479 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:13.107431596 +0000 UTC m=+233.123513999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.611709 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.614474 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:13.114441814 +0000 UTC m=+233.130524217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.634294 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t8qbl" podStartSLOduration=182.634269207 podStartE2EDuration="3m2.634269207s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:12.601897817 +0000 UTC m=+232.617980230" watchObservedRunningTime="2026-03-13 20:31:12.634269207 +0000 UTC m=+232.650351610" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.660030 5029 ???:1] "http: TLS handshake error from 192.168.126.11:58710: no serving certificate available for the kubelet" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.718000 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.718770 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:13.218740991 +0000 UTC m=+233.234823394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.720383 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46" event={"ID":"067c1734-d7ab-4e50-b020-1b65f0350169","Type":"ContainerStarted","Data":"d8cb56af1807806bfb7f2a6bd8bc696395c8304ab1bcc36d53f1e388426b10ce"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.725665 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.761195 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz4wv" event={"ID":"e2f9d5d5-9771-4294-961f-110aa2430e29","Type":"ContainerStarted","Data":"00a5f80e1538f209406fd87a2ae3d8a5e1bae524bf542d10345838fe04b5283b"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.761297 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.761318 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2xlnz"] Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.762699 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.763097 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.779234 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.779310 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.779234 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.782275 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xlnz"] Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.820230 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-utilities\") pod \"redhat-marketplace-2xlnz\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.820305 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-catalog-content\") pod \"redhat-marketplace-2xlnz\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.820337 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d6de239-aeab-4880-8086-72be45fe1cab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1d6de239-aeab-4880-8086-72be45fe1cab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.820363 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xv76\" (UniqueName: \"kubernetes.io/projected/553bdc43-797f-401f-9ca0-875060ab0553-kube-api-access-8xv76\") pod \"redhat-marketplace-2xlnz\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.820393 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6de239-aeab-4880-8086-72be45fe1cab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1d6de239-aeab-4880-8086-72be45fe1cab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.820435 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.821242 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:13.321223788 +0000 UTC m=+233.337306191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.841573 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" event={"ID":"e6046521-c7e4-4f5d-b5ad-81e436fe2d1f","Type":"ContainerStarted","Data":"525dc3990c68407ba4c9526ec31c19b7b2fa4c3b78dfcb72a63c784e87ac0c1f"} Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.899372 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-494x8"] Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.921846 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.927762 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-catalog-content\") pod \"redhat-marketplace-2xlnz\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.927892 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1d6de239-aeab-4880-8086-72be45fe1cab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1d6de239-aeab-4880-8086-72be45fe1cab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.927988 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xv76\" (UniqueName: \"kubernetes.io/projected/553bdc43-797f-401f-9ca0-875060ab0553-kube-api-access-8xv76\") pod \"redhat-marketplace-2xlnz\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.929331 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6de239-aeab-4880-8086-72be45fe1cab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1d6de239-aeab-4880-8086-72be45fe1cab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.929816 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-utilities\") pod \"redhat-marketplace-2xlnz\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:12 crc kubenswrapper[5029]: E0313 20:31:12.930931 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:13.430904459 +0000 UTC m=+233.446986862 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.937712 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-catalog-content\") pod \"redhat-marketplace-2xlnz\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.951824 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6de239-aeab-4880-8086-72be45fe1cab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1d6de239-aeab-4880-8086-72be45fe1cab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:12 crc kubenswrapper[5029]: I0313 20:31:12.957093 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-utilities\") pod \"redhat-marketplace-2xlnz\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.013049 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bdvmc" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.022464 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xv76\" (UniqueName: 
\"kubernetes.io/projected/553bdc43-797f-401f-9ca0-875060ab0553-kube-api-access-8xv76\") pod \"redhat-marketplace-2xlnz\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.031174 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.031864 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:13.531835845 +0000 UTC m=+233.547918248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.033500 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d6de239-aeab-4880-8086-72be45fe1cab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1d6de239-aeab-4880-8086-72be45fe1cab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.043518 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" podStartSLOduration=184.043500058 podStartE2EDuration="3m4.043500058s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:12.955618734 +0000 UTC m=+232.971701127" watchObservedRunningTime="2026-03-13 20:31:13.043500058 +0000 UTC m=+233.059582461" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.059475 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-89hjw" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.116894 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.135724 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.136165 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:13.636142542 +0000 UTC m=+233.652224955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.149997 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dhg5r"] Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.164257 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.167661 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhg5r"] Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.241725 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6k4n\" (UniqueName: \"kubernetes.io/projected/866c95e1-566b-4e67-8822-b6c182cb3378-kube-api-access-q6k4n\") pod \"redhat-marketplace-dhg5r\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.241795 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-utilities\") pod \"redhat-marketplace-dhg5r\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.241991 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-catalog-content\") pod \"redhat-marketplace-dhg5r\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.242077 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 
20:31:13.247309 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:13.747279772 +0000 UTC m=+233.763362175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.255727 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.343115 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.343503 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6k4n\" (UniqueName: \"kubernetes.io/projected/866c95e1-566b-4e67-8822-b6c182cb3378-kube-api-access-q6k4n\") pod \"redhat-marketplace-dhg5r\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.343545 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-utilities\") pod \"redhat-marketplace-dhg5r\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.343598 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-catalog-content\") pod \"redhat-marketplace-dhg5r\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.344379 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-catalog-content\") pod \"redhat-marketplace-dhg5r\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.345277 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:13.845246978 +0000 UTC m=+233.861329381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.345722 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-utilities\") pod \"redhat-marketplace-dhg5r\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.387720 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.389474 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6k4n\" (UniqueName: \"kubernetes.io/projected/866c95e1-566b-4e67-8822-b6c182cb3378-kube-api-access-q6k4n\") pod \"redhat-marketplace-dhg5r\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.446899 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a5bbe6-2908-4756-9e53-58240ec41df8-serving-cert\") pod \"d3a5bbe6-2908-4756-9e53-58240ec41df8\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.446962 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-proxy-ca-bundles\") pod \"d3a5bbe6-2908-4756-9e53-58240ec41df8\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.446986 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-client-ca\") pod \"d3a5bbe6-2908-4756-9e53-58240ec41df8\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.447020 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-config\") pod \"d3a5bbe6-2908-4756-9e53-58240ec41df8\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.447257 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66hbm\" (UniqueName: \"kubernetes.io/projected/d3a5bbe6-2908-4756-9e53-58240ec41df8-kube-api-access-66hbm\") pod \"d3a5bbe6-2908-4756-9e53-58240ec41df8\" (UID: \"d3a5bbe6-2908-4756-9e53-58240ec41df8\") " Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.447456 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.447806 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:13.947791887 +0000 UTC m=+233.963874290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.460008 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d3a5bbe6-2908-4756-9e53-58240ec41df8" (UID: "d3a5bbe6-2908-4756-9e53-58240ec41df8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.460659 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-config" (OuterVolumeSpecName: "config") pod "d3a5bbe6-2908-4756-9e53-58240ec41df8" (UID: "d3a5bbe6-2908-4756-9e53-58240ec41df8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.461172 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3a5bbe6-2908-4756-9e53-58240ec41df8" (UID: "d3a5bbe6-2908-4756-9e53-58240ec41df8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.481041 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a5bbe6-2908-4756-9e53-58240ec41df8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3a5bbe6-2908-4756-9e53-58240ec41df8" (UID: "d3a5bbe6-2908-4756-9e53-58240ec41df8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.481537 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a5bbe6-2908-4756-9e53-58240ec41df8-kube-api-access-66hbm" (OuterVolumeSpecName: "kube-api-access-66hbm") pod "d3a5bbe6-2908-4756-9e53-58240ec41df8" (UID: "d3a5bbe6-2908-4756-9e53-58240ec41df8"). InnerVolumeSpecName "kube-api-access-66hbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.511220 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:13 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:13 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:13 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.511291 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.556363 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.557772 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.558600 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:14.058569787 +0000 UTC m=+234.074652190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.582643 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.587524 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-03-13 20:31:14.087486676 +0000 UTC m=+234.103569069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.588434 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a5bbe6-2908-4756-9e53-58240ec41df8-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.588497 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.588520 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.588539 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a5bbe6-2908-4756-9e53-58240ec41df8-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.588557 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66hbm\" (UniqueName: \"kubernetes.io/projected/d3a5bbe6-2908-4756-9e53-58240ec41df8-kube-api-access-66hbm\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.694454 5029 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.694874 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:14.194842835 +0000 UTC m=+234.210925238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.732356 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vpzl2"] Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.732755 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a5bbe6-2908-4756-9e53-58240ec41df8" containerName="controller-manager" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.732812 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a5bbe6-2908-4756-9e53-58240ec41df8" containerName="controller-manager" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.732998 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a5bbe6-2908-4756-9e53-58240ec41df8" containerName="controller-manager" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 
20:31:13.734220 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.746176 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpzl2"] Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.748217 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.803358 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6cd7\" (UniqueName: \"kubernetes.io/projected/5760820d-9df0-4f3e-b14f-1c64e2607ecd-kube-api-access-q6cd7\") pod \"redhat-operators-vpzl2\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.803428 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-utilities\") pod \"redhat-operators-vpzl2\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.803503 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-catalog-content\") pod \"redhat-operators-vpzl2\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.803553 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.804078 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:14.304055233 +0000 UTC m=+234.320137636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.917985 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.918865 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-catalog-content\") pod \"redhat-operators-vpzl2\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.919009 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q6cd7\" (UniqueName: \"kubernetes.io/projected/5760820d-9df0-4f3e-b14f-1c64e2607ecd-kube-api-access-q6cd7\") pod \"redhat-operators-vpzl2\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.919048 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-utilities\") pod \"redhat-operators-vpzl2\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.919970 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-utilities\") pod \"redhat-operators-vpzl2\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.920257 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-catalog-content\") pod \"redhat-operators-vpzl2\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: E0313 20:31:13.920369 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:14.420344742 +0000 UTC m=+234.436427335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.920505 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dh52p" event={"ID":"87be7113-65b4-48fc-9c93-a7bbb0bf9136","Type":"ContainerStarted","Data":"cb178f1b34ae279a9b0f3453219381788767d7fecb367302402585a31fd3aad8"} Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.970699 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" event={"ID":"f94341bb-1e1c-4a8d-bf68-92658a9c0632","Type":"ContainerStarted","Data":"60b3dfc22e978b3d169134e522bac4a6a73af6fd0404a46c3d958481775730fa"} Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.971367 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6cd7\" (UniqueName: \"kubernetes.io/projected/5760820d-9df0-4f3e-b14f-1c64e2607ecd-kube-api-access-q6cd7\") pod \"redhat-operators-vpzl2\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:13 crc kubenswrapper[5029]: I0313 20:31:13.976428 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4b4b24cbe09bf7c21171a795147ebd5007cc96deaaaac8dc4bb7cdddc6921fd9"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.008979 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca/service-ca-9c57cc56f-dg27c" podStartSLOduration=184.008952916 podStartE2EDuration="3m4.008952916s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:13.999625006 +0000 UTC m=+234.015707409" watchObservedRunningTime="2026-03-13 20:31:14.008952916 +0000 UTC m=+234.025035319" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.014207 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" event={"ID":"31b17eb1-07a9-4cfb-9589-e45a4ac62791","Type":"ContainerStarted","Data":"d4fdb77c6e274602151e690667b70c547d8f8909f296e910cacd479cf3b29248"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.014638 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" event={"ID":"31b17eb1-07a9-4cfb-9589-e45a4ac62791","Type":"ContainerStarted","Data":"c38b0278fcbb4d4fb316651bbb145e3379ca65a3fcd6b042315306f4f016e896"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.021741 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.022350 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:14.522323236 +0000 UTC m=+234.538405639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.044124 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.067454 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56bf9885bd-62hzm"] Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.072978 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.088366 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"88857a34b0790a1feb2f215808731e80a2b007fdda5769c7b98d53e7adb4402b"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.089236 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.125471 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:14 crc 
kubenswrapper[5029]: E0313 20:31:14.126907 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:14.626886169 +0000 UTC m=+234.642968592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.154031 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5v2t" podStartSLOduration=184.153998289 podStartE2EDuration="3m4.153998289s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:14.079425812 +0000 UTC m=+234.095508215" watchObservedRunningTime="2026-03-13 20:31:14.153998289 +0000 UTC m=+234.170080692" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.154238 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56bf9885bd-62hzm"] Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.175573 5029 generic.go:334] "Generic (PLEG): container finished" podID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerID="8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e" exitCode=0 Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.176483 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-kl2lj" event={"ID":"e33b18fb-9cd7-4c30-bdb0-402734c47cc8","Type":"ContainerDied","Data":"8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.176515 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kl2lj" event={"ID":"e33b18fb-9cd7-4c30-bdb0-402734c47cc8","Type":"ContainerStarted","Data":"b5a162d06d896d138b3974572d22f746ddbf51300ea859bf8080250959d83a1c"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.224960 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s58vt"] Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.226703 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.229296 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-config\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.229339 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-proxy-ca-bundles\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.229377 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/572b8404-0d6e-496e-933b-2b98551dcdcc-serving-cert\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.229448 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-client-ca\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.229483 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.229527 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx56r\" (UniqueName: \"kubernetes.io/projected/572b8404-0d6e-496e-933b-2b98551dcdcc-kube-api-access-xx56r\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.230007 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:14.729990914 +0000 UTC m=+234.746073317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.242729 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-frlln" event={"ID":"a301620b-657c-46c0-a1a4-f7774e38f273","Type":"ContainerStarted","Data":"56ece02f4ef039e9a91d0e9610d7895e00f49baf43bae7eecc6a99a0c26153e4"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.263966 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xlnz"] Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.279068 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s58vt"] Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.279127 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f20a8f80dbf81d2326eb4780d6d4c970123cbfaf9f3b07ecd0be1067170e2aa0"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.332900 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.333571 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz67n\" (UniqueName: \"kubernetes.io/projected/151390c1-ebb0-49bf-be99-3326fc839781-kube-api-access-gz67n\") pod \"redhat-operators-s58vt\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.333603 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-client-ca\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.333636 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx56r\" (UniqueName: \"kubernetes.io/projected/572b8404-0d6e-496e-933b-2b98551dcdcc-kube-api-access-xx56r\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.333663 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-config\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.333683 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-proxy-ca-bundles\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " 
pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.333701 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572b8404-0d6e-496e-933b-2b98551dcdcc-serving-cert\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.333741 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-catalog-content\") pod \"redhat-operators-s58vt\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.333767 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-utilities\") pod \"redhat-operators-s58vt\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.334075 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:14.834054834 +0000 UTC m=+234.850137237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.334584 5029 generic.go:334] "Generic (PLEG): container finished" podID="d3a5bbe6-2908-4756-9e53-58240ec41df8" containerID="a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54" exitCode=0 Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.334658 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" event={"ID":"d3a5bbe6-2908-4756-9e53-58240ec41df8","Type":"ContainerDied","Data":"a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.334690 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" event={"ID":"d3a5bbe6-2908-4756-9e53-58240ec41df8","Type":"ContainerDied","Data":"df47d6907789728c6cec05d88bee6a3c50f56a4322b1604570e6a0c0eeb15674"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.334714 5029 scope.go:117] "RemoveContainer" containerID="a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.334890 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9h8rj" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.356834 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-config\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.360435 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-client-ca\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.367754 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-proxy-ca-bundles\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.369527 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-494x8" event={"ID":"3c8fadb2-962e-4bca-8305-a51b8d2334bb","Type":"ContainerStarted","Data":"6c9c4c6fbf86c7fa3f5ff87aca6fcc111251a88b6b2b0e8dc6dbfe0513eb1cac"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.369894 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572b8404-0d6e-496e-933b-2b98551dcdcc-serving-cert\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " 
pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.410162 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46" event={"ID":"067c1734-d7ab-4e50-b020-1b65f0350169","Type":"ContainerStarted","Data":"4d15f1b9c4488645496d133bd9f6929e8db15eb680136bc3a8a1b240dd850043"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.410221 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46" event={"ID":"067c1734-d7ab-4e50-b020-1b65f0350169","Type":"ContainerStarted","Data":"0170266b8f0269835889f929823e19df67a7c59f4422bb719496872c0ffa8f95"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.454688 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-catalog-content\") pod \"redhat-operators-s58vt\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.473399 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-utilities\") pod \"redhat-operators-s58vt\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.473546 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz67n\" (UniqueName: \"kubernetes.io/projected/151390c1-ebb0-49bf-be99-3326fc839781-kube-api-access-gz67n\") pod \"redhat-operators-s58vt\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.473615 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.483510 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx56r\" (UniqueName: \"kubernetes.io/projected/572b8404-0d6e-496e-933b-2b98551dcdcc-kube-api-access-xx56r\") pod \"controller-manager-56bf9885bd-62hzm\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.485779 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-catalog-content\") pod \"redhat-operators-s58vt\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.486343 5029 scope.go:117] "RemoveContainer" containerID="a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.486676 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-utilities\") pod \"redhat-operators-s58vt\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.487021 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:14.98700158 +0000 UTC m=+235.003083983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.495277 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" event={"ID":"a0d54d7e-5ec4-46ce-b90e-96e976596cc3","Type":"ContainerStarted","Data":"26898c34cad0414a25e3581539c527d6ae95b607fac7b91d140ad0edae912b52"} Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.511060 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54\": container with ID starting with a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54 not found: ID does not exist" containerID="a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.511329 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54"} err="failed to get container status \"a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54\": rpc error: code = NotFound desc = could not find container \"a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54\": container with ID starting with a6af7c36843cdad71ce14f48d5260b96f4570508556692b8e65171b3f58dca54 not found: ID does not exist" Mar 13 20:31:14 crc 
kubenswrapper[5029]: I0313 20:31:14.511667 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:14 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:14 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:14 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.511918 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.520948 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" event={"ID":"5db2bce8-6a97-4593-9780-39b314a116b2","Type":"ContainerStarted","Data":"ddb9f5cf56981dd35b8349a86b35eb0b1f3991fa701cf8d8691bb2c7fad46a8e"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.522066 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.564080 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz67n\" (UniqueName: \"kubernetes.io/projected/151390c1-ebb0-49bf-be99-3326fc839781-kube-api-access-gz67n\") pod \"redhat-operators-s58vt\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.577607 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.579489 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:15.079467258 +0000 UTC m=+235.095549661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.596089 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" event={"ID":"ea188f71-10c4-410b-bcb1-766aa053182d","Type":"ContainerStarted","Data":"c5d749ef7a9fe04b1370c4e72fe0110894d885e09b9b0c4b0536f7e9900ad43c"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.596536 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhg5r"] Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.642042 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.667041 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ljj46" podStartSLOduration=184.657070555 podStartE2EDuration="3m4.657070555s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:14.647084227 +0000 UTC m=+234.663166640" watchObservedRunningTime="2026-03-13 20:31:14.657070555 +0000 UTC m=+234.673152958" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.680163 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.681027 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:15.18100079 +0000 UTC m=+235.197083193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.708295 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9h8rj"] Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.713749 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9h8rj"] Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.741223 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f45bw" podStartSLOduration=184.741186658 podStartE2EDuration="3m4.741186658s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:14.689736484 +0000 UTC m=+234.705818887" watchObservedRunningTime="2026-03-13 20:31:14.741186658 +0000 UTC m=+234.757269061" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.743936 5029 generic.go:334] "Generic (PLEG): container finished" podID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerID="ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0" exitCode=0 Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.743969 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz4wv" 
event={"ID":"e2f9d5d5-9771-4294-961f-110aa2430e29","Type":"ContainerDied","Data":"ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.782907 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h2sxq" podStartSLOduration=184.78287376 podStartE2EDuration="3m4.78287376s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:14.729210626 +0000 UTC m=+234.745293029" watchObservedRunningTime="2026-03-13 20:31:14.78287376 +0000 UTC m=+234.798956163" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.783644 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.784802 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:15.284782382 +0000 UTC m=+235.300864785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.786306 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.806021 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" event={"ID":"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05","Type":"ContainerStarted","Data":"7baec69ba59d0f99ecff59871af045e3b028b60ec6b590f4197a0324d8177833"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.859155 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" event={"ID":"fb5e50b8-e1b8-4351-9556-d4da3816791d","Type":"ContainerStarted","Data":"fb2692b779a98bb717add3dcced1c78f560b1c05daa649ab1b4846a925d049b0"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.859205 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" event={"ID":"fb5e50b8-e1b8-4351-9556-d4da3816791d","Type":"ContainerStarted","Data":"f43a4e8345a9bb01ad93a39488da02f5100c86858658112f4026ed25890ed5bb"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.865234 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" podStartSLOduration=74.865215626 podStartE2EDuration="1m14.865215626s" podCreationTimestamp="2026-03-13 
20:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:14.864506298 +0000 UTC m=+234.880588701" watchObservedRunningTime="2026-03-13 20:31:14.865215626 +0000 UTC m=+234.881298019" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.883604 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" event={"ID":"bdc59b31-dc24-48fe-ba01-865f51aaf2cc","Type":"ContainerStarted","Data":"b498310c337cd301102d71c40d66241c300bfeba452e9b53cb9e0001636e09d8"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.884677 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.885762 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.886523 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:15.386504488 +0000 UTC m=+235.402586961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.896085 5029 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mrx25 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.896157 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" podUID="bdc59b31-dc24-48fe-ba01-865f51aaf2cc" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.899664 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sxmb7" event={"ID":"cf8749c5-afa5-48fa-a7a4-a63a7754e27f","Type":"ContainerStarted","Data":"883dd3e2cb0b2f22b13ce3b6b8589800ba2c07db5db2f1f7ee223af0f38466e1"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.919745 5029 generic.go:334] "Generic (PLEG): container finished" podID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerID="e1b2e24f22b81535fd96f08f41a9c957514a896f37fe4a81436d8c088be2b20a" exitCode=0 Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.924335 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8bsws" podStartSLOduration=184.924293946 podStartE2EDuration="3m4.924293946s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:14.913879235 +0000 UTC m=+234.929961658" watchObservedRunningTime="2026-03-13 20:31:14.924293946 +0000 UTC m=+234.940376349" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.924491 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jkhw" event={"ID":"9d4a1347-08c4-42b0-9fb6-268fdc83147f","Type":"ContainerDied","Data":"e1b2e24f22b81535fd96f08f41a9c957514a896f37fe4a81436d8c088be2b20a"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.933764 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jkhw" event={"ID":"9d4a1347-08c4-42b0-9fb6-268fdc83147f","Type":"ContainerStarted","Data":"9f3a6991cd8150dc45662b600a848095550bb2f60d7b87f52ee72eb0b3cde4b8"} Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.934895 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" podUID="de7331b0-d805-4b94-909a-61de2cb70ce1" containerName="route-controller-manager" containerID="cri-o://60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a" gracePeriod=30 Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.937416 5029 patch_prober.go:28] interesting pod/downloads-7954f5f757-x5x9w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.937481 5029 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-x5x9w" podUID="55243e70-3d3c-44df-ac61-d298330ff633" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.942676 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.949515 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-sxmb7" podStartSLOduration=10.949491354 podStartE2EDuration="10.949491354s" podCreationTimestamp="2026-03-13 20:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:14.945629439 +0000 UTC m=+234.961711842" watchObservedRunningTime="2026-03-13 20:31:14.949491354 +0000 UTC m=+234.965573757" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.984831 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" podStartSLOduration=184.984807703 podStartE2EDuration="3m4.984807703s" podCreationTimestamp="2026-03-13 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:14.96757418 +0000 UTC m=+234.983656583" watchObservedRunningTime="2026-03-13 20:31:14.984807703 +0000 UTC m=+235.000890106" Mar 13 20:31:14 crc kubenswrapper[5029]: I0313 20:31:14.987216 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:14 crc kubenswrapper[5029]: E0313 20:31:14.989866 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:15.489824509 +0000 UTC m=+235.505906912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.096168 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:15 crc kubenswrapper[5029]: E0313 20:31:15.100011 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:15.599995484 +0000 UTC m=+235.616077887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.170427 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpzl2"] Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.203248 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:15 crc kubenswrapper[5029]: E0313 20:31:15.203680 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:15.703661593 +0000 UTC m=+235.719743996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.283013 5029 ???:1] "http: TLS handshake error from 192.168.126.11:41610: no serving certificate available for the kubelet" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.310515 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:15 crc kubenswrapper[5029]: E0313 20:31:15.311327 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:15.811310699 +0000 UTC m=+235.827393102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.411355 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:15 crc kubenswrapper[5029]: E0313 20:31:15.411749 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:15.911733622 +0000 UTC m=+235.927816025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.488540 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:15 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:15 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:15 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.488639 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.514830 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:15 crc kubenswrapper[5029]: E0313 20:31:15.515176 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:16.015163175 +0000 UTC m=+236.031245578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.615878 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:15 crc kubenswrapper[5029]: E0313 20:31:15.616318 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:16.116298746 +0000 UTC m=+236.132381149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.640793 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.658629 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s58vt"] Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.676421 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56bf9885bd-62hzm"] Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.718189 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-client-ca\") pod \"de7331b0-d805-4b94-909a-61de2cb70ce1\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.718307 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de7331b0-d805-4b94-909a-61de2cb70ce1-serving-cert\") pod \"de7331b0-d805-4b94-909a-61de2cb70ce1\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.718374 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-config\") pod \"de7331b0-d805-4b94-909a-61de2cb70ce1\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.718398 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnfnk\" (UniqueName: \"kubernetes.io/projected/de7331b0-d805-4b94-909a-61de2cb70ce1-kube-api-access-lnfnk\") pod \"de7331b0-d805-4b94-909a-61de2cb70ce1\" (UID: \"de7331b0-d805-4b94-909a-61de2cb70ce1\") " Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.718739 5029 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:15 crc kubenswrapper[5029]: E0313 20:31:15.719163 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:16.219148214 +0000 UTC m=+236.235230617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.720082 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-client-ca" (OuterVolumeSpecName: "client-ca") pod "de7331b0-d805-4b94-909a-61de2cb70ce1" (UID: "de7331b0-d805-4b94-909a-61de2cb70ce1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.720605 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-config" (OuterVolumeSpecName: "config") pod "de7331b0-d805-4b94-909a-61de2cb70ce1" (UID: "de7331b0-d805-4b94-909a-61de2cb70ce1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.730327 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7331b0-d805-4b94-909a-61de2cb70ce1-kube-api-access-lnfnk" (OuterVolumeSpecName: "kube-api-access-lnfnk") pod "de7331b0-d805-4b94-909a-61de2cb70ce1" (UID: "de7331b0-d805-4b94-909a-61de2cb70ce1"). InnerVolumeSpecName "kube-api-access-lnfnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.741634 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7331b0-d805-4b94-909a-61de2cb70ce1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de7331b0-d805-4b94-909a-61de2cb70ce1" (UID: "de7331b0-d805-4b94-909a-61de2cb70ce1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.819458 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.819711 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de7331b0-d805-4b94-909a-61de2cb70ce1-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.819724 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnfnk\" (UniqueName: \"kubernetes.io/projected/de7331b0-d805-4b94-909a-61de2cb70ce1-kube-api-access-lnfnk\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.819733 5029 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.819740 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de7331b0-d805-4b94-909a-61de2cb70ce1-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:15 crc kubenswrapper[5029]: E0313 20:31:15.819802 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:16.319787132 +0000 UTC m=+236.335869535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.921230 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:15 crc kubenswrapper[5029]: E0313 20:31:15.921570 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:16.42155911 +0000 UTC m=+236.437641513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.962538 5029 generic.go:334] "Generic (PLEG): container finished" podID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerID="d00e6288411ba217f63e2769ebd3036dc0b83ee3f28b33f2fe739e9096f53586" exitCode=0 Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.962685 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-494x8" event={"ID":"3c8fadb2-962e-4bca-8305-a51b8d2334bb","Type":"ContainerDied","Data":"d00e6288411ba217f63e2769ebd3036dc0b83ee3f28b33f2fe739e9096f53586"} Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.970795 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" event={"ID":"572b8404-0d6e-496e-933b-2b98551dcdcc","Type":"ContainerStarted","Data":"77b4148b9872ffc9717895702212c68975a97287f471afcc6e8df812169ab725"} Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.985807 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dh52p" event={"ID":"87be7113-65b4-48fc-9c93-a7bbb0bf9136","Type":"ContainerStarted","Data":"b16f96bb681111c7502809eb5de551a1a453c6c287b6d5960a7fb07406be1bd4"} Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.986024 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:15 crc 
kubenswrapper[5029]: I0313 20:31:15.993644 5029 generic.go:334] "Generic (PLEG): container finished" podID="866c95e1-566b-4e67-8822-b6c182cb3378" containerID="54090f592321456ce898dfcd173ae4d1baed9e0fb40af03eba9c6d37b429956a" exitCode=0 Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.993860 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhg5r" event={"ID":"866c95e1-566b-4e67-8822-b6c182cb3378","Type":"ContainerDied","Data":"54090f592321456ce898dfcd173ae4d1baed9e0fb40af03eba9c6d37b429956a"} Mar 13 20:31:15 crc kubenswrapper[5029]: I0313 20:31:15.993917 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhg5r" event={"ID":"866c95e1-566b-4e67-8822-b6c182cb3378","Type":"ContainerStarted","Data":"e901e01d1d1320324adcb84fb917a20e539d42dc05e04a0a78f56524948d179b"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.012373 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dh52p" podStartSLOduration=12.012342362 podStartE2EDuration="12.012342362s" podCreationTimestamp="2026-03-13 20:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:16.007935404 +0000 UTC m=+236.024017817" watchObservedRunningTime="2026-03-13 20:31:16.012342362 +0000 UTC m=+236.028424775" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.019486 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-frlln" event={"ID":"a301620b-657c-46c0-a1a4-f7774e38f273","Type":"ContainerStarted","Data":"a6fb42791b762997f307262dd06566c79237e8edc81d6a36d137734727c78ca5"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.022572 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:16 crc kubenswrapper[5029]: E0313 20:31:16.022783 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:16.522750452 +0000 UTC m=+236.538832855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.022926 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:16 crc kubenswrapper[5029]: E0313 20:31:16.023290 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:16.523274356 +0000 UTC m=+236.539356759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.033932 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" event={"ID":"c5787c5c-be3a-43cc-bf49-46573f2b31c1","Type":"ContainerStarted","Data":"2e483ea7158683f830d67b62f39a713306905e5610b97911c2535506c9977c79"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.034653 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.043425 5029 generic.go:334] "Generic (PLEG): container finished" podID="de7331b0-d805-4b94-909a-61de2cb70ce1" containerID="60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a" exitCode=0 Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.043609 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.044085 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" event={"ID":"de7331b0-d805-4b94-909a-61de2cb70ce1","Type":"ContainerDied","Data":"60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.044155 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5" event={"ID":"de7331b0-d805-4b94-909a-61de2cb70ce1","Type":"ContainerDied","Data":"9b9b09fa16a22d71465a9534a9ed7e3e73c1d00ef7cc88e42ef0b55f8f1e699a"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.044183 5029 scope.go:117] "RemoveContainer" containerID="60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.057769 5029 generic.go:334] "Generic (PLEG): container finished" podID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerID="e3d8f7abe397d5193498548aaeb0902ff939ac0331242f65adf27098bb856bd3" exitCode=0 Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.057840 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpzl2" event={"ID":"5760820d-9df0-4f3e-b14f-1c64e2607ecd","Type":"ContainerDied","Data":"e3d8f7abe397d5193498548aaeb0902ff939ac0331242f65adf27098bb856bd3"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.057937 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpzl2" event={"ID":"5760820d-9df0-4f3e-b14f-1c64e2607ecd","Type":"ContainerStarted","Data":"a1f42f1d7c167719aad66a7344fc38af406da12549f4c7946033c3de21439189"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.066958 5029 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q"] Mar 13 20:31:16 crc kubenswrapper[5029]: E0313 20:31:16.067577 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7331b0-d805-4b94-909a-61de2cb70ce1" containerName="route-controller-manager" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.067603 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7331b0-d805-4b94-909a-61de2cb70ce1" containerName="route-controller-manager" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.067929 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="de7331b0-d805-4b94-909a-61de2cb70ce1" containerName="route-controller-manager" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.073684 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.084424 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.084484 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.085170 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.085342 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.085515 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.085680 5029 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.087223 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q"] Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.093024 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1d6de239-aeab-4880-8086-72be45fe1cab","Type":"ContainerStarted","Data":"b276936a19138b40edc2114d1b725bab93c33a124fdc2bdfb8979f9465cb9cfb"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.093059 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1d6de239-aeab-4880-8086-72be45fe1cab","Type":"ContainerStarted","Data":"9bdb403016e900bcf0afe1e63063e644585adeb17836d480a6dcfe1fd1ac4045"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.097622 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-frlln" podStartSLOduration=187.097595726 podStartE2EDuration="3m7.097595726s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:16.092812517 +0000 UTC m=+236.108894920" watchObservedRunningTime="2026-03-13 20:31:16.097595726 +0000 UTC m=+236.113678129" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.109658 5029 scope.go:117] "RemoveContainer" containerID="60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.113349 5029 generic.go:334] "Generic (PLEG): container finished" podID="553bdc43-797f-401f-9ca0-875060ab0553" containerID="d972d6e6b3bba28456fe83c77807be5be5bf83bbc79c0f35da697d66a40b1d39" exitCode=0 Mar 13 20:31:16 crc kubenswrapper[5029]: 
I0313 20:31:16.113479 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xlnz" event={"ID":"553bdc43-797f-401f-9ca0-875060ab0553","Type":"ContainerDied","Data":"d972d6e6b3bba28456fe83c77807be5be5bf83bbc79c0f35da697d66a40b1d39"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.113529 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xlnz" event={"ID":"553bdc43-797f-401f-9ca0-875060ab0553","Type":"ContainerStarted","Data":"e46353742625e1e73694b5009cc17df6a74761763432c40ce5fe22e60b45a6e8"} Mar 13 20:31:16 crc kubenswrapper[5029]: E0313 20:31:16.113562 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a\": container with ID starting with 60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a not found: ID does not exist" containerID="60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.114052 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a"} err="failed to get container status \"60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a\": rpc error: code = NotFound desc = could not find container \"60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a\": container with ID starting with 60b2a52a4c3f7da66f1cf3de65f2959e8b0eef7a79eddf637d395ab249b7568a not found: ID does not exist" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.122232 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s58vt" event={"ID":"151390c1-ebb0-49bf-be99-3326fc839781","Type":"ContainerStarted","Data":"f8d961af7a23f171fbcc1552334746a11ef342ef96c2a5b3c4484266afa2f444"} Mar 13 20:31:16 
crc kubenswrapper[5029]: I0313 20:31:16.122293 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s58vt" event={"ID":"151390c1-ebb0-49bf-be99-3326fc839781","Type":"ContainerStarted","Data":"fdfefcf4e8ed0082d885eb328f04c2183255c33405bb375e88d69ef802f95219"} Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.124499 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:16 crc kubenswrapper[5029]: E0313 20:31:16.126081 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:16.626045232 +0000 UTC m=+236.642127765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.138000 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mrx25" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.181113 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" podStartSLOduration=187.181090382 podStartE2EDuration="3m7.181090382s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:16.13714539 +0000 UTC m=+236.153227793" watchObservedRunningTime="2026-03-13 20:31:16.181090382 +0000 UTC m=+236.197172785" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.188982 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"] Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.212127 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zkjm5"] Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.226628 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-config\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: 
\"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.226870 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-client-ca\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.226934 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrh8n\" (UniqueName: \"kubernetes.io/projected/9575b66a-e846-49ec-a7bb-03535765b414-kube-api-access-zrh8n\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.227033 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.227070 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9575b66a-e846-49ec-a7bb-03535765b414-serving-cert\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: E0313 
20:31:16.236332 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:16.736303428 +0000 UTC m=+236.752385951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.329799 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.330109 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-config\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.330157 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-client-ca\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " 
pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.330178 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrh8n\" (UniqueName: \"kubernetes.io/projected/9575b66a-e846-49ec-a7bb-03535765b414-kube-api-access-zrh8n\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.330212 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9575b66a-e846-49ec-a7bb-03535765b414-serving-cert\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.332313 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.332300731 podStartE2EDuration="4.332300731s" podCreationTimestamp="2026-03-13 20:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:16.331691625 +0000 UTC m=+236.347774028" watchObservedRunningTime="2026-03-13 20:31:16.332300731 +0000 UTC m=+236.348383134" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.335721 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-client-ca\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc 
kubenswrapper[5029]: E0313 20:31:16.335830 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:16.835813226 +0000 UTC m=+236.851895629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.349037 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-config\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.362897 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9575b66a-e846-49ec-a7bb-03535765b414-serving-cert\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.377308 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrh8n\" (UniqueName: \"kubernetes.io/projected/9575b66a-e846-49ec-a7bb-03535765b414-kube-api-access-zrh8n\") pod \"route-controller-manager-768bcf944c-nv55q\" (UID: 
\"9575b66a-e846-49ec-a7bb-03535765b414\") " pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.436033 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.439688 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:16 crc kubenswrapper[5029]: E0313 20:31:16.440069 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:16.940055851 +0000 UTC m=+236.956138254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.442953 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.444715 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.475024 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.490135 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:16 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:16 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:16 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.490205 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.541066 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:16 crc kubenswrapper[5029]: E0313 20:31:16.542439 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:17.042414544 +0000 UTC m=+237.058496987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.632576 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a5bbe6-2908-4756-9e53-58240ec41df8" path="/var/lib/kubelet/pods/d3a5bbe6-2908-4756-9e53-58240ec41df8/volumes" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.633329 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7331b0-d805-4b94-909a-61de2cb70ce1" path="/var/lib/kubelet/pods/de7331b0-d805-4b94-909a-61de2cb70ce1/volumes" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.847466 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.849029 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.893625 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.893990 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 13 20:31:16 crc kubenswrapper[5029]: I0313 20:31:16.894299 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.176342 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:17 crc kubenswrapper[5029]: E0313 20:31:17.191131 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.191094669 +0000 UTC m=+238.207177072 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.277907 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ade61b6-4b14-41e6-aef0-baf21400c50b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6ade61b6-4b14-41e6-aef0-baf21400c50b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.277977 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.277999 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ade61b6-4b14-41e6-aef0-baf21400c50b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6ade61b6-4b14-41e6-aef0-baf21400c50b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:17 crc kubenswrapper[5029]: E0313 20:31:17.278564 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:17.778550392 +0000 UTC m=+237.794632795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.330740 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" event={"ID":"572b8404-0d6e-496e-933b-2b98551dcdcc","Type":"ContainerStarted","Data":"6ed4d6a33e92249e1d67ed86c122980aeb440001b9af3d15ffed8407edc1aac2"} Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.331000 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.337903 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.379906 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:17 crc kubenswrapper[5029]: E0313 20:31:17.380228 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-03-13 20:31:17.880207408 +0000 UTC m=+237.896289811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.380345 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ade61b6-4b14-41e6-aef0-baf21400c50b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6ade61b6-4b14-41e6-aef0-baf21400c50b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.380378 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.380401 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ade61b6-4b14-41e6-aef0-baf21400c50b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6ade61b6-4b14-41e6-aef0-baf21400c50b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.380527 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/6ade61b6-4b14-41e6-aef0-baf21400c50b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6ade61b6-4b14-41e6-aef0-baf21400c50b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:17 crc kubenswrapper[5029]: E0313 20:31:17.381012 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:17.880983098 +0000 UTC m=+237.897065691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.383891 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" podStartSLOduration=5.383844715 podStartE2EDuration="5.383844715s" podCreationTimestamp="2026-03-13 20:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:17.356690354 +0000 UTC m=+237.372772757" watchObservedRunningTime="2026-03-13 20:31:17.383844715 +0000 UTC m=+237.399927118" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.414567 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ade61b6-4b14-41e6-aef0-baf21400c50b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6ade61b6-4b14-41e6-aef0-baf21400c50b\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.428481 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ncp4l" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.479070 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.481430 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:17 crc kubenswrapper[5029]: E0313 20:31:17.481904 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:17.981818882 +0000 UTC m=+237.997901285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.491221 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:17 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:17 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:17 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.491277 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.493350 5029 generic.go:334] "Generic (PLEG): container finished" podID="1d6de239-aeab-4880-8086-72be45fe1cab" containerID="b276936a19138b40edc2114d1b725bab93c33a124fdc2bdfb8979f9465cb9cfb" exitCode=0 Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.493472 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1d6de239-aeab-4880-8086-72be45fe1cab","Type":"ContainerDied","Data":"b276936a19138b40edc2114d1b725bab93c33a124fdc2bdfb8979f9465cb9cfb"} Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.515794 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="151390c1-ebb0-49bf-be99-3326fc839781" containerID="f8d961af7a23f171fbcc1552334746a11ef342ef96c2a5b3c4484266afa2f444" exitCode=0 Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.516491 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s58vt" event={"ID":"151390c1-ebb0-49bf-be99-3326fc839781","Type":"ContainerDied","Data":"f8d961af7a23f171fbcc1552334746a11ef342ef96c2a5b3c4484266afa2f444"} Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.522566 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5vkn2" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.548466 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.548638 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.557569 5029 patch_prober.go:28] interesting pod/console-f9d7485db-rvlhd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.557638 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rvlhd" podUID="38ba7d36-baaf-4e14-aa8e-5236ee9500de" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.584150 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.591285 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:17 crc kubenswrapper[5029]: E0313 20:31:17.593179 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.093160688 +0000 UTC m=+238.109243091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:17 crc kubenswrapper[5029]: I0313 20:31:17.697549 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:17 crc kubenswrapper[5029]: E0313 20:31:17.699165 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.199137669 +0000 UTC m=+238.215220142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:17.835038 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:17.835385 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.335370305 +0000 UTC m=+238.351452718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:17.835870 5029 patch_prober.go:28] interesting pod/downloads-7954f5f757-x5x9w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:17.835896 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x5x9w" podUID="55243e70-3d3c-44df-ac61-d298330ff633" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:17.836066 5029 patch_prober.go:28] interesting pod/downloads-7954f5f757-x5x9w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:17.836143 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x5x9w" podUID="55243e70-3d3c-44df-ac61-d298330ff633" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:17.937023 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:17.937165 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.437144733 +0000 UTC m=+238.453227136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:17.937609 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:17.938033 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.438022817 +0000 UTC m=+238.454105220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.041590 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.041931 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.541913623 +0000 UTC m=+238.557996026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.042162 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.042518 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.542510388 +0000 UTC m=+238.558592781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.143908 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.144008 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.643984139 +0000 UTC m=+238.660066542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.144382 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.144736 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.644724528 +0000 UTC m=+238.660806931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.246423 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.746404935 +0000 UTC m=+238.762487328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.246344 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.246961 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.247467 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.747306519 +0000 UTC m=+238.763388922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.498796 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.498997 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:18.998978701 +0000 UTC m=+239.015061104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.499953 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.500301 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.000291626 +0000 UTC m=+239.016374029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.503969 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:18 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:18 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:18 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.504202 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.523975 5029 generic.go:334] "Generic (PLEG): container finished" podID="8143251f-c7f9-42a8-a7ad-dfd9d5f87a05" containerID="7baec69ba59d0f99ecff59871af045e3b028b60ec6b590f4197a0324d8177833" exitCode=0 Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.524117 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" event={"ID":"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05","Type":"ContainerDied","Data":"7baec69ba59d0f99ecff59871af045e3b028b60ec6b590f4197a0324d8177833"} Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.531407 5029 ???:1] "http: TLS handshake error from 
192.168.126.11:41612: no serving certificate available for the kubelet" Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.601945 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.101920521 +0000 UTC m=+239.118002924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.602197 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.603156 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.603829 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.103818291 +0000 UTC m=+239.119900694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.706369 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.718819 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.210326557 +0000 UTC m=+239.226408960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.812944 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.813284 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.313272067 +0000 UTC m=+239.329354470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.838538 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q"] Mar 13 20:31:18 crc kubenswrapper[5029]: I0313 20:31:18.914797 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:18 crc kubenswrapper[5029]: E0313 20:31:18.915134 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.415113827 +0000 UTC m=+239.431196230 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.016014 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.017243 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.517225725 +0000 UTC m=+239.533308128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.117773 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.118073 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.618045318 +0000 UTC m=+239.634127721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.118369 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.118823 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.618808208 +0000 UTC m=+239.634890611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.139161 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 13 20:31:19 crc kubenswrapper[5029]: W0313 20:31:19.176120 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6ade61b6_4b14_41e6_aef0_baf21400c50b.slice/crio-ea24f45a54a74749809d362d411363b3eba00912a3ad78cdf47fa73686d23ed9 WatchSource:0}: Error finding container ea24f45a54a74749809d362d411363b3eba00912a3ad78cdf47fa73686d23ed9: Status 404 returned error can't find the container with id ea24f45a54a74749809d362d411363b3eba00912a3ad78cdf47fa73686d23ed9 Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.219364 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.219624 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.71959956 +0000 UTC m=+239.735681973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.219693 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.229537 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.729521188 +0000 UTC m=+239.745603581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.241445 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.321874 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.322138 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.822113499 +0000 UTC m=+239.838195902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.322323 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6de239-aeab-4880-8086-72be45fe1cab-kubelet-dir\") pod \"1d6de239-aeab-4880-8086-72be45fe1cab\" (UID: \"1d6de239-aeab-4880-8086-72be45fe1cab\") " Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.322398 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d6de239-aeab-4880-8086-72be45fe1cab-kube-api-access\") pod \"1d6de239-aeab-4880-8086-72be45fe1cab\" (UID: 
\"1d6de239-aeab-4880-8086-72be45fe1cab\") " Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.323871 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.324353 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.824338599 +0000 UTC m=+239.840421002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.324570 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d6de239-aeab-4880-8086-72be45fe1cab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1d6de239-aeab-4880-8086-72be45fe1cab" (UID: "1d6de239-aeab-4880-8086-72be45fe1cab"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.331917 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6de239-aeab-4880-8086-72be45fe1cab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1d6de239-aeab-4880-8086-72be45fe1cab" (UID: "1d6de239-aeab-4880-8086-72be45fe1cab"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.430607 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.430963 5029 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6de239-aeab-4880-8086-72be45fe1cab-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.430984 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d6de239-aeab-4880-8086-72be45fe1cab-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.431066 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:19.93104685 +0000 UTC m=+239.947129263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.488426 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:19 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:19 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:19 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.488500 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.531979 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.532302 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-13 20:31:20.032290235 +0000 UTC m=+240.048372638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.549973 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" event={"ID":"5db2bce8-6a97-4593-9780-39b314a116b2","Type":"ContainerStarted","Data":"8edc7ba8d69d47f6b1a9a91d6e5235b8cb33fdb778d74deb4daff3baf24e8bfc"} Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.551917 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6ade61b6-4b14-41e6-aef0-baf21400c50b","Type":"ContainerStarted","Data":"ea24f45a54a74749809d362d411363b3eba00912a3ad78cdf47fa73686d23ed9"} Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.556636 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.557230 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1d6de239-aeab-4880-8086-72be45fe1cab","Type":"ContainerDied","Data":"9bdb403016e900bcf0afe1e63063e644585adeb17836d480a6dcfe1fd1ac4045"} Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.557267 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bdb403016e900bcf0afe1e63063e644585adeb17836d480a6dcfe1fd1ac4045" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.564554 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" event={"ID":"9575b66a-e846-49ec-a7bb-03535765b414","Type":"ContainerStarted","Data":"e5589a8a07ffe8a14ab025d09aac637c57a620fb989f2535e7736d2a1d59842e"} Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.564590 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" event={"ID":"9575b66a-e846-49ec-a7bb-03535765b414","Type":"ContainerStarted","Data":"f9f2254bc7e11ac8359ab747480f9684389ef1a832a009ef0411573798446f83"} Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.565599 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.593189 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.593418 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" 
podStartSLOduration=7.593397989 podStartE2EDuration="7.593397989s" podCreationTimestamp="2026-03-13 20:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:19.592427072 +0000 UTC m=+239.608509475" watchObservedRunningTime="2026-03-13 20:31:19.593397989 +0000 UTC m=+239.609480392" Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.633666 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.634139 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.134105704 +0000 UTC m=+240.150188107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.636681 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.637296 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.137283019 +0000 UTC m=+240.153365422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.740473 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.740893 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.240872206 +0000 UTC m=+240.256954609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.842346 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.842762 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.342743498 +0000 UTC m=+240.358825901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.944792 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.944992 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.444959568 +0000 UTC m=+240.461041971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.945256 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:19 crc kubenswrapper[5029]: E0313 20:31:19.945645 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.445626146 +0000 UTC m=+240.461708549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:19 crc kubenswrapper[5029]: I0313 20:31:19.953107 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.046246 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm6ts\" (UniqueName: \"kubernetes.io/projected/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-kube-api-access-pm6ts\") pod \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.046294 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-config-volume\") pod \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.046461 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.046510 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-secret-volume\") pod \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\" (UID: \"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05\") " Mar 13 20:31:20 crc kubenswrapper[5029]: E0313 20:31:20.046776 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.546745377 +0000 UTC m=+240.562827790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.047044 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.047381 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-config-volume" (OuterVolumeSpecName: "config-volume") pod "8143251f-c7f9-42a8-a7ad-dfd9d5f87a05" (UID: "8143251f-c7f9-42a8-a7ad-dfd9d5f87a05"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:20 crc kubenswrapper[5029]: E0313 20:31:20.047485 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.547449356 +0000 UTC m=+240.563531839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.052712 5029 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.055761 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8143251f-c7f9-42a8-a7ad-dfd9d5f87a05" (UID: "8143251f-c7f9-42a8-a7ad-dfd9d5f87a05"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.060951 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-kube-api-access-pm6ts" (OuterVolumeSpecName: "kube-api-access-pm6ts") pod "8143251f-c7f9-42a8-a7ad-dfd9d5f87a05" (UID: "8143251f-c7f9-42a8-a7ad-dfd9d5f87a05"). InnerVolumeSpecName "kube-api-access-pm6ts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.148460 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:20 crc kubenswrapper[5029]: E0313 20:31:20.148642 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.648607348 +0000 UTC m=+240.664689751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.148821 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.148996 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm6ts\" (UniqueName: \"kubernetes.io/projected/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-kube-api-access-pm6ts\") 
on node \"crc\" DevicePath \"\"" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.149083 5029 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.149104 5029 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:20 crc kubenswrapper[5029]: E0313 20:31:20.149226 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-13 20:31:20.649215584 +0000 UTC m=+240.665298067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lbggs" (UID: "120ab712-4dde-43e5-8e14-f755accec059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.184697 5029 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-13T20:31:20.052747559Z","Handler":null,"Name":""} Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.194180 5029 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 
20:31:20.194330 5029 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.250459 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.287762 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.357635 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.362820 5029 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.362889 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.419785 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lbggs\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.440341 5029 ???:1] "http: TLS handshake error from 192.168.126.11:41616: no serving certificate available for the kubelet" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.482191 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:20 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:20 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:20 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.482249 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.597397 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.597432 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7" event={"ID":"8143251f-c7f9-42a8-a7ad-dfd9d5f87a05","Type":"ContainerDied","Data":"393948d6eea55e97a1be91dbcc6d95607134460f62fe72255b8b267c432ebad8"} Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.597499 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="393948d6eea55e97a1be91dbcc6d95607134460f62fe72255b8b267c432ebad8" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.627809 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.636622 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.645544 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.646887 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6ade61b6-4b14-41e6-aef0-baf21400c50b","Type":"ContainerStarted","Data":"0ff51eeb523719e0bd2007b2b27ee2b0e516ee5e3e33b35a703a9589a19fe7b2"} Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.646942 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" event={"ID":"5db2bce8-6a97-4593-9780-39b314a116b2","Type":"ContainerStarted","Data":"9352481ae235a5027eab85bf48d52998946d69aadc24f201d04102d14e762921"} Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.695914 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.69129853 podStartE2EDuration="4.69129853s" podCreationTimestamp="2026-03-13 20:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:20.691225038 +0000 UTC m=+240.707307441" watchObservedRunningTime="2026-03-13 20:31:20.69129853 +0000 UTC m=+240.707380933" Mar 13 20:31:20 crc kubenswrapper[5029]: I0313 20:31:20.981122 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lbggs"] Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.491079 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:21 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:21 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:21 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.491406 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.671797 5029 generic.go:334] "Generic (PLEG): container finished" podID="6ade61b6-4b14-41e6-aef0-baf21400c50b" containerID="0ff51eeb523719e0bd2007b2b27ee2b0e516ee5e3e33b35a703a9589a19fe7b2" exitCode=0 Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.671897 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6ade61b6-4b14-41e6-aef0-baf21400c50b","Type":"ContainerDied","Data":"0ff51eeb523719e0bd2007b2b27ee2b0e516ee5e3e33b35a703a9589a19fe7b2"} Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.683292 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" event={"ID":"120ab712-4dde-43e5-8e14-f755accec059","Type":"ContainerStarted","Data":"bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c"} Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.683349 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" event={"ID":"120ab712-4dde-43e5-8e14-f755accec059","Type":"ContainerStarted","Data":"93e3f8ad2f01fe32b11beb1a17b36cd2b3232126c51abaa0a3be9f30f89bd9dc"} Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.683899 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.725928 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" event={"ID":"5db2bce8-6a97-4593-9780-39b314a116b2","Type":"ContainerStarted","Data":"658cdce511f618fce042c391835e8297cb18569826ba60db3c0ff6f43a420f6b"} Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.749210 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" podStartSLOduration=192.749182145 podStartE2EDuration="3m12.749182145s" podCreationTimestamp="2026-03-13 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:21.741234861 +0000 UTC m=+241.757317274" watchObservedRunningTime="2026-03-13 20:31:21.749182145 +0000 UTC m=+241.765264558" Mar 13 20:31:21 crc kubenswrapper[5029]: I0313 20:31:21.788016 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rjjb9" podStartSLOduration=17.787998129 podStartE2EDuration="17.787998129s" podCreationTimestamp="2026-03-13 20:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:31:21.780528078 +0000 UTC m=+241.796610501" watchObservedRunningTime="2026-03-13 20:31:21.787998129 +0000 UTC m=+241.804080532" Mar 13 20:31:21 crc kubenswrapper[5029]: E0313 20:31:21.834228 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8143251f_c7f9_42a8_a7ad_dfd9d5f87a05.slice\": RecentStats: unable to find data in memory cache]" Mar 13 20:31:22 crc kubenswrapper[5029]: I0313 20:31:22.481622 5029 patch_prober.go:28] interesting 
pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:22 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:22 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:22 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:22 crc kubenswrapper[5029]: I0313 20:31:22.481901 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.189895 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.316120 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ade61b6-4b14-41e6-aef0-baf21400c50b-kube-api-access\") pod \"6ade61b6-4b14-41e6-aef0-baf21400c50b\" (UID: \"6ade61b6-4b14-41e6-aef0-baf21400c50b\") " Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.316171 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ade61b6-4b14-41e6-aef0-baf21400c50b-kubelet-dir\") pod \"6ade61b6-4b14-41e6-aef0-baf21400c50b\" (UID: \"6ade61b6-4b14-41e6-aef0-baf21400c50b\") " Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.316724 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ade61b6-4b14-41e6-aef0-baf21400c50b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6ade61b6-4b14-41e6-aef0-baf21400c50b" (UID: "6ade61b6-4b14-41e6-aef0-baf21400c50b"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.324191 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ade61b6-4b14-41e6-aef0-baf21400c50b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6ade61b6-4b14-41e6-aef0-baf21400c50b" (UID: "6ade61b6-4b14-41e6-aef0-baf21400c50b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.418189 5029 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ade61b6-4b14-41e6-aef0-baf21400c50b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.418231 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ade61b6-4b14-41e6-aef0-baf21400c50b-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.499364 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:23 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:23 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:23 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.499446 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.769566 5029 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6ade61b6-4b14-41e6-aef0-baf21400c50b","Type":"ContainerDied","Data":"ea24f45a54a74749809d362d411363b3eba00912a3ad78cdf47fa73686d23ed9"} Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.769616 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea24f45a54a74749809d362d411363b3eba00912a3ad78cdf47fa73686d23ed9" Mar 13 20:31:23 crc kubenswrapper[5029]: I0313 20:31:23.769685 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 13 20:31:24 crc kubenswrapper[5029]: I0313 20:31:24.483387 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:24 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:24 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:24 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:24 crc kubenswrapper[5029]: I0313 20:31:24.483439 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:25 crc kubenswrapper[5029]: I0313 20:31:25.487723 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:25 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:25 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:25 crc kubenswrapper[5029]: healthz check failed Mar 13 
20:31:25 crc kubenswrapper[5029]: I0313 20:31:25.488113 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:25 crc kubenswrapper[5029]: I0313 20:31:25.996951 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dh52p" Mar 13 20:31:26 crc kubenswrapper[5029]: I0313 20:31:26.483494 5029 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 20:31:26 crc kubenswrapper[5029]: [-]has-synced failed: reason withheld Mar 13 20:31:26 crc kubenswrapper[5029]: [+]process-running ok Mar 13 20:31:26 crc kubenswrapper[5029]: healthz check failed Mar 13 20:31:26 crc kubenswrapper[5029]: I0313 20:31:26.483575 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnz" podUID="45b7cbc6-c2b2-4a7d-af59-49c2ab673dd7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:31:27 crc kubenswrapper[5029]: I0313 20:31:27.481769 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:27 crc kubenswrapper[5029]: I0313 20:31:27.485005 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h2jnz" Mar 13 20:31:27 crc kubenswrapper[5029]: I0313 20:31:27.560333 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:27 crc kubenswrapper[5029]: I0313 20:31:27.565372 5029 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:31:27 crc kubenswrapper[5029]: I0313 20:31:27.818815 5029 patch_prober.go:28] interesting pod/downloads-7954f5f757-x5x9w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 13 20:31:27 crc kubenswrapper[5029]: I0313 20:31:27.818873 5029 patch_prober.go:28] interesting pod/downloads-7954f5f757-x5x9w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 13 20:31:27 crc kubenswrapper[5029]: I0313 20:31:27.818881 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x5x9w" podUID="55243e70-3d3c-44df-ac61-d298330ff633" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 13 20:31:27 crc kubenswrapper[5029]: I0313 20:31:27.818934 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x5x9w" podUID="55243e70-3d3c-44df-ac61-d298330ff633" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 13 20:31:31 crc kubenswrapper[5029]: I0313 20:31:31.793573 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56bf9885bd-62hzm"] Mar 13 20:31:31 crc kubenswrapper[5029]: I0313 20:31:31.793815 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" podUID="572b8404-0d6e-496e-933b-2b98551dcdcc" containerName="controller-manager" 
containerID="cri-o://6ed4d6a33e92249e1d67ed86c122980aeb440001b9af3d15ffed8407edc1aac2" gracePeriod=30 Mar 13 20:31:31 crc kubenswrapper[5029]: I0313 20:31:31.816815 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q"] Mar 13 20:31:31 crc kubenswrapper[5029]: I0313 20:31:31.817102 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" podUID="9575b66a-e846-49ec-a7bb-03535765b414" containerName="route-controller-manager" containerID="cri-o://e5589a8a07ffe8a14ab025d09aac637c57a620fb989f2535e7736d2a1d59842e" gracePeriod=30 Mar 13 20:31:31 crc kubenswrapper[5029]: I0313 20:31:31.949984 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:31:31 crc kubenswrapper[5029]: I0313 20:31:31.950390 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:31:32 crc kubenswrapper[5029]: E0313 20:31:32.011416 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8143251f_c7f9_42a8_a7ad_dfd9d5f87a05.slice\": RecentStats: unable to find data in memory cache]" Mar 13 20:31:32 crc kubenswrapper[5029]: I0313 20:31:32.868098 5029 generic.go:334] "Generic (PLEG): container finished" podID="572b8404-0d6e-496e-933b-2b98551dcdcc" 
containerID="6ed4d6a33e92249e1d67ed86c122980aeb440001b9af3d15ffed8407edc1aac2" exitCode=0 Mar 13 20:31:32 crc kubenswrapper[5029]: I0313 20:31:32.868190 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" event={"ID":"572b8404-0d6e-496e-933b-2b98551dcdcc","Type":"ContainerDied","Data":"6ed4d6a33e92249e1d67ed86c122980aeb440001b9af3d15ffed8407edc1aac2"} Mar 13 20:31:32 crc kubenswrapper[5029]: I0313 20:31:32.871555 5029 generic.go:334] "Generic (PLEG): container finished" podID="9575b66a-e846-49ec-a7bb-03535765b414" containerID="e5589a8a07ffe8a14ab025d09aac637c57a620fb989f2535e7736d2a1d59842e" exitCode=0 Mar 13 20:31:32 crc kubenswrapper[5029]: I0313 20:31:32.871614 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" event={"ID":"9575b66a-e846-49ec-a7bb-03535765b414","Type":"ContainerDied","Data":"e5589a8a07ffe8a14ab025d09aac637c57a620fb989f2535e7736d2a1d59842e"} Mar 13 20:31:34 crc kubenswrapper[5029]: I0313 20:31:34.526282 5029 patch_prober.go:28] interesting pod/controller-manager-56bf9885bd-62hzm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" start-of-body= Mar 13 20:31:34 crc kubenswrapper[5029]: I0313 20:31:34.526631 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" podUID="572b8404-0d6e-496e-933b-2b98551dcdcc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" Mar 13 20:31:37 crc kubenswrapper[5029]: I0313 20:31:37.446499 5029 patch_prober.go:28] interesting pod/route-controller-manager-768bcf944c-nv55q container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 20:31:37 crc kubenswrapper[5029]: I0313 20:31:37.446810 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" podUID="9575b66a-e846-49ec-a7bb-03535765b414" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 20:31:37 crc kubenswrapper[5029]: I0313 20:31:37.838543 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-x5x9w" Mar 13 20:31:40 crc kubenswrapper[5029]: I0313 20:31:40.645126 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:31:40 crc kubenswrapper[5029]: I0313 20:31:40.945026 5029 ???:1] "http: TLS handshake error from 192.168.126.11:40496: no serving certificate available for the kubelet" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.524561 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.553006 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5"] Mar 13 20:31:41 crc kubenswrapper[5029]: E0313 20:31:41.553287 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9575b66a-e846-49ec-a7bb-03535765b414" containerName="route-controller-manager" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.553304 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9575b66a-e846-49ec-a7bb-03535765b414" containerName="route-controller-manager" Mar 13 20:31:41 crc kubenswrapper[5029]: E0313 20:31:41.553315 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8143251f-c7f9-42a8-a7ad-dfd9d5f87a05" containerName="collect-profiles" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.553324 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8143251f-c7f9-42a8-a7ad-dfd9d5f87a05" containerName="collect-profiles" Mar 13 20:31:41 crc kubenswrapper[5029]: E0313 20:31:41.553347 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ade61b6-4b14-41e6-aef0-baf21400c50b" containerName="pruner" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.553357 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ade61b6-4b14-41e6-aef0-baf21400c50b" containerName="pruner" Mar 13 20:31:41 crc kubenswrapper[5029]: E0313 20:31:41.553374 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6de239-aeab-4880-8086-72be45fe1cab" containerName="pruner" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.553381 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6de239-aeab-4880-8086-72be45fe1cab" containerName="pruner" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.553534 5029 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="9575b66a-e846-49ec-a7bb-03535765b414" containerName="route-controller-manager" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.553549 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8143251f-c7f9-42a8-a7ad-dfd9d5f87a05" containerName="collect-profiles" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.553563 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ade61b6-4b14-41e6-aef0-baf21400c50b" containerName="pruner" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.553577 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d6de239-aeab-4880-8086-72be45fe1cab" containerName="pruner" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.554022 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.566942 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5"] Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.598520 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.598566 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q" event={"ID":"9575b66a-e846-49ec-a7bb-03535765b414","Type":"ContainerDied","Data":"f9f2254bc7e11ac8359ab747480f9684389ef1a832a009ef0411573798446f83"} Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.598645 5029 scope.go:117] "RemoveContainer" containerID="e5589a8a07ffe8a14ab025d09aac637c57a620fb989f2535e7736d2a1d59842e" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.688530 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-client-ca\") pod \"9575b66a-e846-49ec-a7bb-03535765b414\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.688628 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9575b66a-e846-49ec-a7bb-03535765b414-serving-cert\") pod \"9575b66a-e846-49ec-a7bb-03535765b414\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.688702 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrh8n\" (UniqueName: \"kubernetes.io/projected/9575b66a-e846-49ec-a7bb-03535765b414-kube-api-access-zrh8n\") pod \"9575b66a-e846-49ec-a7bb-03535765b414\" (UID: \"9575b66a-e846-49ec-a7bb-03535765b414\") " Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.688752 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-config\") pod \"9575b66a-e846-49ec-a7bb-03535765b414\" (UID: 
\"9575b66a-e846-49ec-a7bb-03535765b414\") " Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.689525 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-client-ca" (OuterVolumeSpecName: "client-ca") pod "9575b66a-e846-49ec-a7bb-03535765b414" (UID: "9575b66a-e846-49ec-a7bb-03535765b414"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.689612 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-config" (OuterVolumeSpecName: "config") pod "9575b66a-e846-49ec-a7bb-03535765b414" (UID: "9575b66a-e846-49ec-a7bb-03535765b414"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.689801 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-client-ca\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.689871 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwqcb\" (UniqueName: \"kubernetes.io/projected/99b29f44-9606-4aaf-b2ec-5f92286ae70b-kube-api-access-vwqcb\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.689905 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/99b29f44-9606-4aaf-b2ec-5f92286ae70b-serving-cert\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.689948 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-config\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.690007 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.690019 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9575b66a-e846-49ec-a7bb-03535765b414-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.694438 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9575b66a-e846-49ec-a7bb-03535765b414-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9575b66a-e846-49ec-a7bb-03535765b414" (UID: "9575b66a-e846-49ec-a7bb-03535765b414"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.695441 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9575b66a-e846-49ec-a7bb-03535765b414-kube-api-access-zrh8n" (OuterVolumeSpecName: "kube-api-access-zrh8n") pod "9575b66a-e846-49ec-a7bb-03535765b414" (UID: "9575b66a-e846-49ec-a7bb-03535765b414"). 
InnerVolumeSpecName "kube-api-access-zrh8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.790945 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-client-ca\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.791028 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwqcb\" (UniqueName: \"kubernetes.io/projected/99b29f44-9606-4aaf-b2ec-5f92286ae70b-kube-api-access-vwqcb\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.791065 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b29f44-9606-4aaf-b2ec-5f92286ae70b-serving-cert\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.791107 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-config\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.791154 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrh8n\" (UniqueName: 
\"kubernetes.io/projected/9575b66a-e846-49ec-a7bb-03535765b414-kube-api-access-zrh8n\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.791167 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9575b66a-e846-49ec-a7bb-03535765b414-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.792455 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-config\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.793568 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-client-ca\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.798796 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b29f44-9606-4aaf-b2ec-5f92286ae70b-serving-cert\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.808784 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwqcb\" (UniqueName: \"kubernetes.io/projected/99b29f44-9606-4aaf-b2ec-5f92286ae70b-kube-api-access-vwqcb\") pod \"route-controller-manager-5589d879d-8s9t5\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") 
" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.881948 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.926778 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q"] Mar 13 20:31:41 crc kubenswrapper[5029]: I0313 20:31:41.929816 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768bcf944c-nv55q"] Mar 13 20:31:42 crc kubenswrapper[5029]: E0313 20:31:42.126621 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8143251f_c7f9_42a8_a7ad_dfd9d5f87a05.slice\": RecentStats: unable to find data in memory cache]" Mar 13 20:31:44 crc kubenswrapper[5029]: I0313 20:31:42.610694 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9575b66a-e846-49ec-a7bb-03535765b414" path="/var/lib/kubelet/pods/9575b66a-e846-49ec-a7bb-03535765b414/volumes" Mar 13 20:31:45 crc kubenswrapper[5029]: I0313 20:31:45.535658 5029 patch_prober.go:28] interesting pod/controller-manager-56bf9885bd-62hzm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 20:31:45 crc kubenswrapper[5029]: I0313 20:31:45.536045 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" podUID="572b8404-0d6e-496e-933b-2b98551dcdcc" containerName="controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 20:31:46 crc kubenswrapper[5029]: I0313 20:31:46.990827 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 13 20:31:46 crc kubenswrapper[5029]: I0313 20:31:46.992160 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:31:46 crc kubenswrapper[5029]: I0313 20:31:46.994321 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 20:31:46 crc kubenswrapper[5029]: I0313 20:31:46.996449 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 13 20:31:46 crc kubenswrapper[5029]: I0313 20:31:46.997948 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 13 20:31:47 crc kubenswrapper[5029]: I0313 20:31:47.087656 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3512168b-da6f-49b8-8f87-501a62256fba-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3512168b-da6f-49b8-8f87-501a62256fba\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:31:47 crc kubenswrapper[5029]: I0313 20:31:47.087925 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3512168b-da6f-49b8-8f87-501a62256fba-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3512168b-da6f-49b8-8f87-501a62256fba\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:31:47 crc kubenswrapper[5029]: I0313 20:31:47.188653 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3512168b-da6f-49b8-8f87-501a62256fba-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3512168b-da6f-49b8-8f87-501a62256fba\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:31:47 crc kubenswrapper[5029]: I0313 20:31:47.188701 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3512168b-da6f-49b8-8f87-501a62256fba-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3512168b-da6f-49b8-8f87-501a62256fba\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:31:47 crc kubenswrapper[5029]: I0313 20:31:47.188796 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3512168b-da6f-49b8-8f87-501a62256fba-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3512168b-da6f-49b8-8f87-501a62256fba\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:31:47 crc kubenswrapper[5029]: I0313 20:31:47.209358 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3512168b-da6f-49b8-8f87-501a62256fba-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3512168b-da6f-49b8-8f87-501a62256fba\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:31:47 crc kubenswrapper[5029]: I0313 20:31:47.343361 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:31:47 crc kubenswrapper[5029]: I0313 20:31:47.474508 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cb72p" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.707969 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.733825 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9b774699f-fc5rs"] Mar 13 20:31:49 crc kubenswrapper[5029]: E0313 20:31:49.734300 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572b8404-0d6e-496e-933b-2b98551dcdcc" containerName="controller-manager" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.734411 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="572b8404-0d6e-496e-933b-2b98551dcdcc" containerName="controller-manager" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.734594 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="572b8404-0d6e-496e-933b-2b98551dcdcc" containerName="controller-manager" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.735061 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.754287 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9b774699f-fc5rs"] Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.834696 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-config\") pod \"572b8404-0d6e-496e-933b-2b98551dcdcc\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.834749 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-client-ca\") pod \"572b8404-0d6e-496e-933b-2b98551dcdcc\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.834801 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx56r\" (UniqueName: \"kubernetes.io/projected/572b8404-0d6e-496e-933b-2b98551dcdcc-kube-api-access-xx56r\") pod \"572b8404-0d6e-496e-933b-2b98551dcdcc\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.834827 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572b8404-0d6e-496e-933b-2b98551dcdcc-serving-cert\") pod \"572b8404-0d6e-496e-933b-2b98551dcdcc\" (UID: \"572b8404-0d6e-496e-933b-2b98551dcdcc\") " Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.834918 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-proxy-ca-bundles\") pod \"572b8404-0d6e-496e-933b-2b98551dcdcc\" (UID: 
\"572b8404-0d6e-496e-933b-2b98551dcdcc\") " Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.835095 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-proxy-ca-bundles\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.835133 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-client-ca\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.835155 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-config\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.835182 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5hr\" (UniqueName: \"kubernetes.io/projected/10a54eb1-12b9-4aeb-92b2-102259c87db2-kube-api-access-rs5hr\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.835211 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/10a54eb1-12b9-4aeb-92b2-102259c87db2-serving-cert\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.835703 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-client-ca" (OuterVolumeSpecName: "client-ca") pod "572b8404-0d6e-496e-933b-2b98551dcdcc" (UID: "572b8404-0d6e-496e-933b-2b98551dcdcc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.835815 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-config" (OuterVolumeSpecName: "config") pod "572b8404-0d6e-496e-933b-2b98551dcdcc" (UID: "572b8404-0d6e-496e-933b-2b98551dcdcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.835916 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "572b8404-0d6e-496e-933b-2b98551dcdcc" (UID: "572b8404-0d6e-496e-933b-2b98551dcdcc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.838185 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572b8404-0d6e-496e-933b-2b98551dcdcc-kube-api-access-xx56r" (OuterVolumeSpecName: "kube-api-access-xx56r") pod "572b8404-0d6e-496e-933b-2b98551dcdcc" (UID: "572b8404-0d6e-496e-933b-2b98551dcdcc"). InnerVolumeSpecName "kube-api-access-xx56r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.839326 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572b8404-0d6e-496e-933b-2b98551dcdcc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "572b8404-0d6e-496e-933b-2b98551dcdcc" (UID: "572b8404-0d6e-496e-933b-2b98551dcdcc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.947433 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-client-ca\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.947529 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-config\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.947613 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs5hr\" (UniqueName: \"kubernetes.io/projected/10a54eb1-12b9-4aeb-92b2-102259c87db2-kube-api-access-rs5hr\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.947682 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10a54eb1-12b9-4aeb-92b2-102259c87db2-serving-cert\") pod 
\"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.947973 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-proxy-ca-bundles\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.948060 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.948079 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.948090 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/572b8404-0d6e-496e-933b-2b98551dcdcc-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.948103 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx56r\" (UniqueName: \"kubernetes.io/projected/572b8404-0d6e-496e-933b-2b98551dcdcc-kube-api-access-xx56r\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.948115 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572b8404-0d6e-496e-933b-2b98551dcdcc-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.948659 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-client-ca\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.949407 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-proxy-ca-bundles\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.949526 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-config\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.953750 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10a54eb1-12b9-4aeb-92b2-102259c87db2-serving-cert\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:49 crc kubenswrapper[5029]: I0313 20:31:49.966352 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs5hr\" (UniqueName: \"kubernetes.io/projected/10a54eb1-12b9-4aeb-92b2-102259c87db2-kube-api-access-rs5hr\") pod \"controller-manager-9b774699f-fc5rs\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:50 crc 
kubenswrapper[5029]: I0313 20:31:50.057762 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:31:50 crc kubenswrapper[5029]: I0313 20:31:50.356756 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 13 20:31:50 crc kubenswrapper[5029]: I0313 20:31:50.578316 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" event={"ID":"572b8404-0d6e-496e-933b-2b98551dcdcc","Type":"ContainerDied","Data":"77b4148b9872ffc9717895702212c68975a97287f471afcc6e8df812169ab725"} Mar 13 20:31:50 crc kubenswrapper[5029]: I0313 20:31:50.578480 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56bf9885bd-62hzm" Mar 13 20:31:50 crc kubenswrapper[5029]: I0313 20:31:50.611246 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56bf9885bd-62hzm"] Mar 13 20:31:50 crc kubenswrapper[5029]: I0313 20:31:50.611311 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56bf9885bd-62hzm"] Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.181934 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.182627 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.198032 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.371686 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-var-lock\") pod \"installer-9-crc\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.372016 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.372178 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5457502b-4e3f-463b-87ae-4013109d2298-kube-api-access\") pod \"installer-9-crc\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.474172 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-var-lock\") pod \"installer-9-crc\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.474536 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.474619 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.474292 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-var-lock\") pod \"installer-9-crc\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.474910 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5457502b-4e3f-463b-87ae-4013109d2298-kube-api-access\") pod \"installer-9-crc\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.496934 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5457502b-4e3f-463b-87ae-4013109d2298-kube-api-access\") pod \"installer-9-crc\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.596022 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.807363 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9b774699f-fc5rs"] Mar 13 20:31:51 crc kubenswrapper[5029]: I0313 20:31:51.899081 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5"] Mar 13 20:31:52 crc kubenswrapper[5029]: E0313 20:31:52.264177 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8143251f_c7f9_42a8_a7ad_dfd9d5f87a05.slice\": RecentStats: unable to find data in memory cache]" Mar 13 20:31:52 crc kubenswrapper[5029]: I0313 20:31:52.611977 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572b8404-0d6e-496e-933b-2b98551dcdcc" path="/var/lib/kubelet/pods/572b8404-0d6e-496e-933b-2b98551dcdcc/volumes" Mar 13 20:32:00 crc kubenswrapper[5029]: I0313 20:32:00.146539 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557232-9pmbr"] Mar 13 20:32:00 crc kubenswrapper[5029]: I0313 20:32:00.148441 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557232-9pmbr" Mar 13 20:32:00 crc kubenswrapper[5029]: I0313 20:32:00.153778 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557232-9pmbr"] Mar 13 20:32:00 crc kubenswrapper[5029]: I0313 20:32:00.155323 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:32:00 crc kubenswrapper[5029]: I0313 20:32:00.348272 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6mm\" (UniqueName: \"kubernetes.io/projected/9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d-kube-api-access-rn6mm\") pod \"auto-csr-approver-29557232-9pmbr\" (UID: \"9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d\") " pod="openshift-infra/auto-csr-approver-29557232-9pmbr" Mar 13 20:32:00 crc kubenswrapper[5029]: I0313 20:32:00.449906 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6mm\" (UniqueName: \"kubernetes.io/projected/9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d-kube-api-access-rn6mm\") pod \"auto-csr-approver-29557232-9pmbr\" (UID: \"9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d\") " pod="openshift-infra/auto-csr-approver-29557232-9pmbr" Mar 13 20:32:00 crc kubenswrapper[5029]: I0313 20:32:00.479947 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6mm\" (UniqueName: \"kubernetes.io/projected/9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d-kube-api-access-rn6mm\") pod \"auto-csr-approver-29557232-9pmbr\" (UID: \"9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d\") " pod="openshift-infra/auto-csr-approver-29557232-9pmbr" Mar 13 20:32:00 crc kubenswrapper[5029]: I0313 20:32:00.767780 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557232-9pmbr" Mar 13 20:32:01 crc kubenswrapper[5029]: I0313 20:32:01.950435 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:32:01 crc kubenswrapper[5029]: I0313 20:32:01.951350 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:32:02 crc kubenswrapper[5029]: E0313 20:32:02.397113 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8143251f_c7f9_42a8_a7ad_dfd9d5f87a05.slice\": RecentStats: unable to find data in memory cache]" Mar 13 20:32:09 crc kubenswrapper[5029]: E0313 20:32:09.882942 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 13 20:32:09 crc kubenswrapper[5029]: E0313 20:32:09.883682 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgj84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5jkhw_openshift-marketplace(9d4a1347-08c4-42b0-9fb6-268fdc83147f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 20:32:09 crc kubenswrapper[5029]: E0313 20:32:09.885090 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5jkhw" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" Mar 13 20:32:12 crc 
kubenswrapper[5029]: E0313 20:32:12.035250 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5jkhw" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" Mar 13 20:32:12 crc kubenswrapper[5029]: E0313 20:32:12.503591 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8143251f_c7f9_42a8_a7ad_dfd9d5f87a05.slice\": RecentStats: unable to find data in memory cache]" Mar 13 20:32:17 crc kubenswrapper[5029]: E0313 20:32:17.102062 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 13 20:32:17 crc kubenswrapper[5029]: E0313 20:32:17.102486 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66klh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qz4wv_openshift-marketplace(e2f9d5d5-9771-4294-961f-110aa2430e29): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 20:32:17 crc kubenswrapper[5029]: E0313 20:32:17.103696 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qz4wv" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" Mar 13 20:32:18 crc 
kubenswrapper[5029]: E0313 20:32:18.036716 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 13 20:32:18 crc kubenswrapper[5029]: E0313 20:32:18.036928 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qk6pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-kl2lj_openshift-marketplace(e33b18fb-9cd7-4c30-bdb0-402734c47cc8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 20:32:18 crc kubenswrapper[5029]: E0313 20:32:18.038110 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kl2lj" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" Mar 13 20:32:18 crc kubenswrapper[5029]: E0313 20:32:18.706497 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qz4wv" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" Mar 13 20:32:21 crc kubenswrapper[5029]: E0313 20:32:21.713310 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 13 20:32:21 crc kubenswrapper[5029]: E0313 20:32:21.713835 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xv76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2xlnz_openshift-marketplace(553bdc43-797f-401f-9ca0-875060ab0553): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 20:32:21 crc kubenswrapper[5029]: E0313 20:32:21.715230 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2xlnz" podUID="553bdc43-797f-401f-9ca0-875060ab0553" Mar 13 20:32:21 crc 
kubenswrapper[5029]: I0313 20:32:21.928277 5029 ???:1] "http: TLS handshake error from 192.168.126.11:37994: no serving certificate available for the kubelet" Mar 13 20:32:22 crc kubenswrapper[5029]: E0313 20:32:22.734156 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2xlnz" podUID="553bdc43-797f-401f-9ca0-875060ab0553" Mar 13 20:32:22 crc kubenswrapper[5029]: E0313 20:32:22.734177 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kl2lj" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" Mar 13 20:32:22 crc kubenswrapper[5029]: E0313 20:32:22.979893 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 13 20:32:22 crc kubenswrapper[5029]: E0313 20:32:22.980052 5029 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 20:32:22 crc kubenswrapper[5029]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 13 20:32:22 crc kubenswrapper[5029]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jh25b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29557230-trnjq_openshift-infra(5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 13 20:32:22 crc kubenswrapper[5029]: > logger="UnhandledError" Mar 13 20:32:22 crc kubenswrapper[5029]: E0313 20:32:22.981169 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29557230-trnjq" podUID="5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd" Mar 13 20:32:23 crc kubenswrapper[5029]: E0313 20:32:23.834621 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29557230-trnjq" podUID="5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd" Mar 13 20:32:26 crc kubenswrapper[5029]: I0313 20:32:26.513124 5029 scope.go:117] "RemoveContainer" containerID="6ed4d6a33e92249e1d67ed86c122980aeb440001b9af3d15ffed8407edc1aac2" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 
20:32:26.599590 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.600223 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6k4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-dhg5r_openshift-marketplace(866c95e1-566b-4e67-8822-b6c182cb3378): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.601394 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dhg5r" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.606926 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.607074 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6cd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vpzl2_openshift-marketplace(5760820d-9df0-4f3e-b14f-1c64e2607ecd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.608725 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vpzl2" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" Mar 13 20:32:26 crc 
kubenswrapper[5029]: E0313 20:32:26.636538 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.636774 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tnhl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-494x8_openshift-marketplace(3c8fadb2-962e-4bca-8305-a51b8d2334bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.638366 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-494x8" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.659427 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.660131 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz67n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-s58vt_openshift-marketplace(151390c1-ebb0-49bf-be99-3326fc839781): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.661394 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s58vt" podUID="151390c1-ebb0-49bf-be99-3326fc839781" Mar 13 20:32:26 crc 
kubenswrapper[5029]: I0313 20:32:26.856419 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.860262 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-494x8" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.860408 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s58vt" podUID="151390c1-ebb0-49bf-be99-3326fc839781" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.861959 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dhg5r" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" Mar 13 20:32:26 crc kubenswrapper[5029]: E0313 20:32:26.862040 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vpzl2" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.003205 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.011324 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5"] Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.119998 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557232-9pmbr"] Mar 13 20:32:27 crc kubenswrapper[5029]: W0313 20:32:27.132520 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bfdd95b_3452_4c9b_9df1_8a3ee4c43a3d.slice/crio-9e6b83bf4221f31bf74442320b6e6ebbab005b8568147f5a0a58909e2e4a56b6 WatchSource:0}: Error finding container 9e6b83bf4221f31bf74442320b6e6ebbab005b8568147f5a0a58909e2e4a56b6: Status 404 returned error can't find the container with id 9e6b83bf4221f31bf74442320b6e6ebbab005b8568147f5a0a58909e2e4a56b6 Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.139782 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9b774699f-fc5rs"] Mar 13 20:32:27 crc kubenswrapper[5029]: W0313 20:32:27.143235 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a54eb1_12b9_4aeb_92b2_102259c87db2.slice/crio-6b5507445b1ea01a8c4bd6f6ff41b1dde75615bca0ba9eaf0884db156cccdedb WatchSource:0}: Error finding container 6b5507445b1ea01a8c4bd6f6ff41b1dde75615bca0ba9eaf0884db156cccdedb: Status 404 returned error can't find the container with id 6b5507445b1ea01a8c4bd6f6ff41b1dde75615bca0ba9eaf0884db156cccdedb Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.863024 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557232-9pmbr" event={"ID":"9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d","Type":"ContainerStarted","Data":"9e6b83bf4221f31bf74442320b6e6ebbab005b8568147f5a0a58909e2e4a56b6"} Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.869099 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" event={"ID":"10a54eb1-12b9-4aeb-92b2-102259c87db2","Type":"ContainerStarted","Data":"98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55"} Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.869158 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" event={"ID":"10a54eb1-12b9-4aeb-92b2-102259c87db2","Type":"ContainerStarted","Data":"6b5507445b1ea01a8c4bd6f6ff41b1dde75615bca0ba9eaf0884db156cccdedb"} Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.869303 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" podUID="10a54eb1-12b9-4aeb-92b2-102259c87db2" containerName="controller-manager" containerID="cri-o://98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55" gracePeriod=30 Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.869546 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.874375 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.875455 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" event={"ID":"99b29f44-9606-4aaf-b2ec-5f92286ae70b","Type":"ContainerStarted","Data":"51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05"} Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.875512 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" 
event={"ID":"99b29f44-9606-4aaf-b2ec-5f92286ae70b","Type":"ContainerStarted","Data":"8892e460c719e4877db8f7ff846693834edff90b9d71ede1830d7c9f5c5bdc18"} Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.875659 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" podUID="99b29f44-9606-4aaf-b2ec-5f92286ae70b" containerName="route-controller-manager" containerID="cri-o://51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05" gracePeriod=30 Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.876261 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.883169 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5457502b-4e3f-463b-87ae-4013109d2298","Type":"ContainerStarted","Data":"f2257a9df9be6d1dbecd60af052897fd368b4587b13b4ee3880b732b37447ded"} Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.883223 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5457502b-4e3f-463b-87ae-4013109d2298","Type":"ContainerStarted","Data":"71fc6799eb91cd8acd52b4e9ea3b5363302e2caa58e161e0b7711bf967f0df1d"} Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.885911 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3512168b-da6f-49b8-8f87-501a62256fba","Type":"ContainerStarted","Data":"288e6714cbf00da70ce73d7a81f46f71ff09f62b618fd32a81ff488830f84d77"} Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.885952 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"3512168b-da6f-49b8-8f87-501a62256fba","Type":"ContainerStarted","Data":"0f1d7d83daec53f1cdd1726d889e64688786d7821e47b8a645ba54b242c9e9cc"} Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.886669 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.893374 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" podStartSLOduration=56.893357868 podStartE2EDuration="56.893357868s" podCreationTimestamp="2026-03-13 20:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:32:27.889579266 +0000 UTC m=+307.905661679" watchObservedRunningTime="2026-03-13 20:32:27.893357868 +0000 UTC m=+307.909440271" Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.910681 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" podStartSLOduration=56.910662954 podStartE2EDuration="56.910662954s" podCreationTimestamp="2026-03-13 20:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:32:27.908424824 +0000 UTC m=+307.924507237" watchObservedRunningTime="2026-03-13 20:32:27.910662954 +0000 UTC m=+307.926745357" Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.931984 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=41.931968187 podStartE2EDuration="41.931968187s" podCreationTimestamp="2026-03-13 20:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-13 20:32:27.930807276 +0000 UTC m=+307.946889699" watchObservedRunningTime="2026-03-13 20:32:27.931968187 +0000 UTC m=+307.948050590" Mar 13 20:32:27 crc kubenswrapper[5029]: I0313 20:32:27.970767 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=36.97074874 podStartE2EDuration="36.97074874s" podCreationTimestamp="2026-03-13 20:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:32:27.968323525 +0000 UTC m=+307.984405928" watchObservedRunningTime="2026-03-13 20:32:27.97074874 +0000 UTC m=+307.986831153" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.763866 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.770302 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.794731 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b"] Mar 13 20:32:28 crc kubenswrapper[5029]: E0313 20:32:28.795063 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99b29f44-9606-4aaf-b2ec-5f92286ae70b" containerName="route-controller-manager" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.795083 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="99b29f44-9606-4aaf-b2ec-5f92286ae70b" containerName="route-controller-manager" Mar 13 20:32:28 crc kubenswrapper[5029]: E0313 20:32:28.795096 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a54eb1-12b9-4aeb-92b2-102259c87db2" containerName="controller-manager" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.795106 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a54eb1-12b9-4aeb-92b2-102259c87db2" containerName="controller-manager" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.795233 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="99b29f44-9606-4aaf-b2ec-5f92286ae70b" containerName="route-controller-manager" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.795248 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a54eb1-12b9-4aeb-92b2-102259c87db2" containerName="controller-manager" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.795776 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.801131 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b"] Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.810825 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwqcb\" (UniqueName: \"kubernetes.io/projected/99b29f44-9606-4aaf-b2ec-5f92286ae70b-kube-api-access-vwqcb\") pod \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.812447 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-client-ca\") pod \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.812488 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b29f44-9606-4aaf-b2ec-5f92286ae70b-serving-cert\") pod \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.812515 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-config\") pod \"10a54eb1-12b9-4aeb-92b2-102259c87db2\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.813087 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-client-ca\") pod 
\"10a54eb1-12b9-4aeb-92b2-102259c87db2\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.813135 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-config\") pod \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\" (UID: \"99b29f44-9606-4aaf-b2ec-5f92286ae70b\") " Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.813135 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-client-ca" (OuterVolumeSpecName: "client-ca") pod "99b29f44-9606-4aaf-b2ec-5f92286ae70b" (UID: "99b29f44-9606-4aaf-b2ec-5f92286ae70b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.813246 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-config" (OuterVolumeSpecName: "config") pod "10a54eb1-12b9-4aeb-92b2-102259c87db2" (UID: "10a54eb1-12b9-4aeb-92b2-102259c87db2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.813492 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-client-ca" (OuterVolumeSpecName: "client-ca") pod "10a54eb1-12b9-4aeb-92b2-102259c87db2" (UID: "10a54eb1-12b9-4aeb-92b2-102259c87db2"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.813789 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-config" (OuterVolumeSpecName: "config") pod "99b29f44-9606-4aaf-b2ec-5f92286ae70b" (UID: "99b29f44-9606-4aaf-b2ec-5f92286ae70b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.816772 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a54eb1-12b9-4aeb-92b2-102259c87db2-kube-api-access-rs5hr" (OuterVolumeSpecName: "kube-api-access-rs5hr") pod "10a54eb1-12b9-4aeb-92b2-102259c87db2" (UID: "10a54eb1-12b9-4aeb-92b2-102259c87db2"). InnerVolumeSpecName "kube-api-access-rs5hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.817091 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b29f44-9606-4aaf-b2ec-5f92286ae70b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "99b29f44-9606-4aaf-b2ec-5f92286ae70b" (UID: "99b29f44-9606-4aaf-b2ec-5f92286ae70b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.817158 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs5hr\" (UniqueName: \"kubernetes.io/projected/10a54eb1-12b9-4aeb-92b2-102259c87db2-kube-api-access-rs5hr\") pod \"10a54eb1-12b9-4aeb-92b2-102259c87db2\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.817228 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-proxy-ca-bundles\") pod \"10a54eb1-12b9-4aeb-92b2-102259c87db2\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.817250 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10a54eb1-12b9-4aeb-92b2-102259c87db2-serving-cert\") pod \"10a54eb1-12b9-4aeb-92b2-102259c87db2\" (UID: \"10a54eb1-12b9-4aeb-92b2-102259c87db2\") " Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.819377 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "10a54eb1-12b9-4aeb-92b2-102259c87db2" (UID: "10a54eb1-12b9-4aeb-92b2-102259c87db2"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.819916 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.819974 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b29f44-9606-4aaf-b2ec-5f92286ae70b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.819985 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.819994 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.820003 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b29f44-9606-4aaf-b2ec-5f92286ae70b-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.820015 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs5hr\" (UniqueName: \"kubernetes.io/projected/10a54eb1-12b9-4aeb-92b2-102259c87db2-kube-api-access-rs5hr\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.820022 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10a54eb1-12b9-4aeb-92b2-102259c87db2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.820553 5029 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b29f44-9606-4aaf-b2ec-5f92286ae70b-kube-api-access-vwqcb" (OuterVolumeSpecName: "kube-api-access-vwqcb") pod "99b29f44-9606-4aaf-b2ec-5f92286ae70b" (UID: "99b29f44-9606-4aaf-b2ec-5f92286ae70b"). InnerVolumeSpecName "kube-api-access-vwqcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.829035 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a54eb1-12b9-4aeb-92b2-102259c87db2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "10a54eb1-12b9-4aeb-92b2-102259c87db2" (UID: "10a54eb1-12b9-4aeb-92b2-102259c87db2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.893063 5029 generic.go:334] "Generic (PLEG): container finished" podID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerID="0db8189b37b301bd214a8dae0bd353f87272f3a26b057bb1280193100c850993" exitCode=0 Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.893127 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jkhw" event={"ID":"9d4a1347-08c4-42b0-9fb6-268fdc83147f","Type":"ContainerDied","Data":"0db8189b37b301bd214a8dae0bd353f87272f3a26b057bb1280193100c850993"} Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.897424 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557232-9pmbr" event={"ID":"9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d","Type":"ContainerStarted","Data":"6f03dc2e7a7ff9634559dade79a1341b394c88eea7ed16a2dfdaf5f5785d5647"} Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.899906 5029 generic.go:334] "Generic (PLEG): container finished" podID="10a54eb1-12b9-4aeb-92b2-102259c87db2" containerID="98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55" exitCode=0 Mar 13 20:32:28 crc kubenswrapper[5029]: 
I0313 20:32:28.899934 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.899999 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" event={"ID":"10a54eb1-12b9-4aeb-92b2-102259c87db2","Type":"ContainerDied","Data":"98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55"} Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.900034 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b774699f-fc5rs" event={"ID":"10a54eb1-12b9-4aeb-92b2-102259c87db2","Type":"ContainerDied","Data":"6b5507445b1ea01a8c4bd6f6ff41b1dde75615bca0ba9eaf0884db156cccdedb"} Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.900055 5029 scope.go:117] "RemoveContainer" containerID="98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.910992 5029 generic.go:334] "Generic (PLEG): container finished" podID="99b29f44-9606-4aaf-b2ec-5f92286ae70b" containerID="51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05" exitCode=0 Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.911051 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" event={"ID":"99b29f44-9606-4aaf-b2ec-5f92286ae70b","Type":"ContainerDied","Data":"51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05"} Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.911077 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" event={"ID":"99b29f44-9606-4aaf-b2ec-5f92286ae70b","Type":"ContainerDied","Data":"8892e460c719e4877db8f7ff846693834edff90b9d71ede1830d7c9f5c5bdc18"} Mar 13 20:32:28 
crc kubenswrapper[5029]: I0313 20:32:28.911127 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.916384 5029 generic.go:334] "Generic (PLEG): container finished" podID="3512168b-da6f-49b8-8f87-501a62256fba" containerID="288e6714cbf00da70ce73d7a81f46f71ff09f62b618fd32a81ff488830f84d77" exitCode=0 Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.916490 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3512168b-da6f-49b8-8f87-501a62256fba","Type":"ContainerDied","Data":"288e6714cbf00da70ce73d7a81f46f71ff09f62b618fd32a81ff488830f84d77"} Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.924686 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-client-ca\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.924754 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g262x\" (UniqueName: \"kubernetes.io/projected/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-kube-api-access-g262x\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.924802 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-serving-cert\") pod 
\"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.924907 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-config\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.924972 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwqcb\" (UniqueName: \"kubernetes.io/projected/99b29f44-9606-4aaf-b2ec-5f92286ae70b-kube-api-access-vwqcb\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.924988 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10a54eb1-12b9-4aeb-92b2-102259c87db2-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.937715 5029 scope.go:117] "RemoveContainer" containerID="98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55" Mar 13 20:32:28 crc kubenswrapper[5029]: E0313 20:32:28.938348 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55\": container with ID starting with 98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55 not found: ID does not exist" containerID="98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.938385 5029 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55"} err="failed to get container status \"98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55\": rpc error: code = NotFound desc = could not find container \"98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55\": container with ID starting with 98a31f73ab88afc3f37ba373498a72b7386b3f8d22a3cee71f8d2449c2893a55 not found: ID does not exist" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.938408 5029 scope.go:117] "RemoveContainer" containerID="51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.952532 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557232-9pmbr" podStartSLOduration=27.802698667 podStartE2EDuration="28.952512426s" podCreationTimestamp="2026-03-13 20:32:00 +0000 UTC" firstStartedPulling="2026-03-13 20:32:27.137471639 +0000 UTC m=+307.153554042" lastFinishedPulling="2026-03-13 20:32:28.287285398 +0000 UTC m=+308.303367801" observedRunningTime="2026-03-13 20:32:28.947577934 +0000 UTC m=+308.963660337" watchObservedRunningTime="2026-03-13 20:32:28.952512426 +0000 UTC m=+308.968594849" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.963261 5029 csr.go:261] certificate signing request csr-rnd5f is approved, waiting to be issued Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.967740 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5"] Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.970029 5029 csr.go:257] certificate signing request csr-rnd5f is issued Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.972978 5029 scope.go:117] "RemoveContainer" containerID="51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05" Mar 13 20:32:28 crc kubenswrapper[5029]: E0313 20:32:28.973410 
5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05\": container with ID starting with 51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05 not found: ID does not exist" containerID="51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.973441 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05"} err="failed to get container status \"51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05\": rpc error: code = NotFound desc = could not find container \"51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05\": container with ID starting with 51504129ff16f7b00c69e37638a05be393a9f3ede769789341ec1d2f62af8a05 not found: ID does not exist" Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.981827 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589d879d-8s9t5"] Mar 13 20:32:28 crc kubenswrapper[5029]: I0313 20:32:28.996835 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9b774699f-fc5rs"] Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.001196 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9b774699f-fc5rs"] Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.026393 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-config\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" 
Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.026460 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-client-ca\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.026494 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g262x\" (UniqueName: \"kubernetes.io/projected/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-kube-api-access-g262x\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.026557 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-serving-cert\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.027415 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-client-ca\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.027613 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-config\") pod 
\"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.030440 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-serving-cert\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.044692 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g262x\" (UniqueName: \"kubernetes.io/projected/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-kube-api-access-g262x\") pod \"route-controller-manager-6fb5b74fcb-tg57b\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") " pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.158377 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.540745 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b"] Mar 13 20:32:29 crc kubenswrapper[5029]: W0313 20:32:29.552392 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9542ffe3_ce7f_4ff3_a22e_6d0fffbc34ce.slice/crio-4478354c1d525d50e53fc78a8dccfb940f663fd69ac026638a7c8b1a5a5d149f WatchSource:0}: Error finding container 4478354c1d525d50e53fc78a8dccfb940f663fd69ac026638a7c8b1a5a5d149f: Status 404 returned error can't find the container with id 4478354c1d525d50e53fc78a8dccfb940f663fd69ac026638a7c8b1a5a5d149f Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.923910 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jkhw" event={"ID":"9d4a1347-08c4-42b0-9fb6-268fdc83147f","Type":"ContainerStarted","Data":"ca1ee83c839bcf07433b909552ab6c7228f0819db2440a6bb4b0c6211b2b405a"} Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.925992 5029 generic.go:334] "Generic (PLEG): container finished" podID="9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d" containerID="6f03dc2e7a7ff9634559dade79a1341b394c88eea7ed16a2dfdaf5f5785d5647" exitCode=0 Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.926053 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557232-9pmbr" event={"ID":"9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d","Type":"ContainerDied","Data":"6f03dc2e7a7ff9634559dade79a1341b394c88eea7ed16a2dfdaf5f5785d5647"} Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.930812 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" 
event={"ID":"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce","Type":"ContainerStarted","Data":"1986c0190d27bbced0edde57fdc38f57fbbf491d9751391a42b53a128d6cf8cb"} Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.930839 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" event={"ID":"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce","Type":"ContainerStarted","Data":"4478354c1d525d50e53fc78a8dccfb940f663fd69ac026638a7c8b1a5a5d149f"} Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.945037 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5jkhw" podStartSLOduration=5.46289853 podStartE2EDuration="1m19.945012203s" podCreationTimestamp="2026-03-13 20:31:10 +0000 UTC" firstStartedPulling="2026-03-13 20:31:14.935104547 +0000 UTC m=+234.951186950" lastFinishedPulling="2026-03-13 20:32:29.41721822 +0000 UTC m=+309.433300623" observedRunningTime="2026-03-13 20:32:29.942228968 +0000 UTC m=+309.958311381" watchObservedRunningTime="2026-03-13 20:32:29.945012203 +0000 UTC m=+309.961094606" Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.972541 5029 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-16 08:13:58.166825144 +0000 UTC Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.972591 5029 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6659h41m28.194237859s for next certificate rotation Mar 13 20:32:29 crc kubenswrapper[5029]: I0313 20:32:29.978162 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" podStartSLOduration=38.978142124 podStartE2EDuration="38.978142124s" podCreationTimestamp="2026-03-13 20:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:32:29.974212468 +0000 UTC m=+309.990294871" watchObservedRunningTime="2026-03-13 20:32:29.978142124 +0000 UTC m=+309.994224547" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.208758 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.350751 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3512168b-da6f-49b8-8f87-501a62256fba-kube-api-access\") pod \"3512168b-da6f-49b8-8f87-501a62256fba\" (UID: \"3512168b-da6f-49b8-8f87-501a62256fba\") " Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.350796 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3512168b-da6f-49b8-8f87-501a62256fba-kubelet-dir\") pod \"3512168b-da6f-49b8-8f87-501a62256fba\" (UID: \"3512168b-da6f-49b8-8f87-501a62256fba\") " Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.351085 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3512168b-da6f-49b8-8f87-501a62256fba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3512168b-da6f-49b8-8f87-501a62256fba" (UID: "3512168b-da6f-49b8-8f87-501a62256fba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.357143 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3512168b-da6f-49b8-8f87-501a62256fba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3512168b-da6f-49b8-8f87-501a62256fba" (UID: "3512168b-da6f-49b8-8f87-501a62256fba"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.452715 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3512168b-da6f-49b8-8f87-501a62256fba-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.452748 5029 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3512168b-da6f-49b8-8f87-501a62256fba-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.607015 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a54eb1-12b9-4aeb-92b2-102259c87db2" path="/var/lib/kubelet/pods/10a54eb1-12b9-4aeb-92b2-102259c87db2/volumes" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.607793 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99b29f44-9606-4aaf-b2ec-5f92286ae70b" path="/var/lib/kubelet/pods/99b29f44-9606-4aaf-b2ec-5f92286ae70b/volumes" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.937085 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3512168b-da6f-49b8-8f87-501a62256fba","Type":"ContainerDied","Data":"0f1d7d83daec53f1cdd1726d889e64688786d7821e47b8a645ba54b242c9e9cc"} Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.937381 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f1d7d83daec53f1cdd1726d889e64688786d7821e47b8a645ba54b242c9e9cc" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.937123 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.941344 5029 generic.go:334] "Generic (PLEG): container finished" podID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerID="dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba" exitCode=0 Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.941388 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz4wv" event={"ID":"e2f9d5d5-9771-4294-961f-110aa2430e29","Type":"ContainerDied","Data":"dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba"} Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.941757 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.946926 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.972987 5029 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-29 10:11:19.520622921 +0000 UTC Mar 13 20:32:30 crc kubenswrapper[5029]: I0313 20:32:30.973020 5029 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6253h38m48.547606008s for next certificate rotation Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.182689 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557232-9pmbr" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.262996 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn6mm\" (UniqueName: \"kubernetes.io/projected/9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d-kube-api-access-rn6mm\") pod \"9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d\" (UID: \"9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d\") " Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.269104 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d-kube-api-access-rn6mm" (OuterVolumeSpecName: "kube-api-access-rn6mm") pod "9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d" (UID: "9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d"). InnerVolumeSpecName "kube-api-access-rn6mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.364117 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn6mm\" (UniqueName: \"kubernetes.io/projected/9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d-kube-api-access-rn6mm\") on node \"crc\" DevicePath \"\"" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.410282 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5jkhw" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.410347 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5jkhw" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.740225 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d58595869-5lbzz"] Mar 13 20:32:31 crc kubenswrapper[5029]: E0313 20:32:31.741124 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d" containerName="oc" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 
20:32:31.741216 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d" containerName="oc" Mar 13 20:32:31 crc kubenswrapper[5029]: E0313 20:32:31.741298 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3512168b-da6f-49b8-8f87-501a62256fba" containerName="pruner" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.741371 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="3512168b-da6f-49b8-8f87-501a62256fba" containerName="pruner" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.741597 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d" containerName="oc" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.741684 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="3512168b-da6f-49b8-8f87-501a62256fba" containerName="pruner" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.742226 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.745522 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.749341 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.749466 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.749779 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.749831 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.750197 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.751272 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d58595869-5lbzz"] Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.754910 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.870733 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkjbv\" (UniqueName: \"kubernetes.io/projected/888c9632-2c69-4e11-a3bd-0c3efdbb3672-kube-api-access-wkjbv\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " 
pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.870908 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-proxy-ca-bundles\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.870966 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-config\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.871114 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-client-ca\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.871145 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/888c9632-2c69-4e11-a3bd-0c3efdbb3672-serving-cert\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.947614 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557232-9pmbr" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.947967 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557232-9pmbr" event={"ID":"9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d","Type":"ContainerDied","Data":"9e6b83bf4221f31bf74442320b6e6ebbab005b8568147f5a0a58909e2e4a56b6"} Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.948004 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e6b83bf4221f31bf74442320b6e6ebbab005b8568147f5a0a58909e2e4a56b6" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.949781 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.949823 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.949885 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.950254 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.950309 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5" gracePeriod=600 Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.951597 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz4wv" event={"ID":"e2f9d5d5-9771-4294-961f-110aa2430e29","Type":"ContainerStarted","Data":"f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5"} Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.974573 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkjbv\" (UniqueName: \"kubernetes.io/projected/888c9632-2c69-4e11-a3bd-0c3efdbb3672-kube-api-access-wkjbv\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.974642 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-proxy-ca-bundles\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.974676 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-config\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 
13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.974700 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-client-ca\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.974717 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/888c9632-2c69-4e11-a3bd-0c3efdbb3672-serving-cert\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.976093 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-client-ca\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.976332 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qz4wv" podStartSLOduration=5.248932451 podStartE2EDuration="1m21.976314669s" podCreationTimestamp="2026-03-13 20:31:10 +0000 UTC" firstStartedPulling="2026-03-13 20:31:14.785351887 +0000 UTC m=+234.801434290" lastFinishedPulling="2026-03-13 20:32:31.512734105 +0000 UTC m=+311.528816508" observedRunningTime="2026-03-13 20:32:31.973415471 +0000 UTC m=+311.989497884" watchObservedRunningTime="2026-03-13 20:32:31.976314669 +0000 UTC m=+311.992397072" Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.981492 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/888c9632-2c69-4e11-a3bd-0c3efdbb3672-serving-cert\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz"
Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.988211 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-config\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz"
Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.989400 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-proxy-ca-bundles\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz"
Mar 13 20:32:31 crc kubenswrapper[5029]: I0313 20:32:31.991868 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkjbv\" (UniqueName: \"kubernetes.io/projected/888c9632-2c69-4e11-a3bd-0c3efdbb3672-kube-api-access-wkjbv\") pod \"controller-manager-5d58595869-5lbzz\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") " pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz"
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.066987 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz"
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.273290 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d58595869-5lbzz"]
Mar 13 20:32:32 crc kubenswrapper[5029]: W0313 20:32:32.282499 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod888c9632_2c69_4e11_a3bd_0c3efdbb3672.slice/crio-6bcdbd09b1105a0f0ef29783b386393dad95463d7f224897b3ed96ede7677d56 WatchSource:0}: Error finding container 6bcdbd09b1105a0f0ef29783b386393dad95463d7f224897b3ed96ede7677d56: Status 404 returned error can't find the container with id 6bcdbd09b1105a0f0ef29783b386393dad95463d7f224897b3ed96ede7677d56
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.739877 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5jkhw" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerName="registry-server" probeResult="failure" output=<
Mar 13 20:32:32 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s
Mar 13 20:32:32 crc kubenswrapper[5029]: >
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.958053 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" event={"ID":"888c9632-2c69-4e11-a3bd-0c3efdbb3672","Type":"ContainerStarted","Data":"f14ca39c01d3b841228cee2c199259704d2a813dab20d51c16f55064c159c49b"}
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.958399 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz"
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.958418 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" event={"ID":"888c9632-2c69-4e11-a3bd-0c3efdbb3672","Type":"ContainerStarted","Data":"6bcdbd09b1105a0f0ef29783b386393dad95463d7f224897b3ed96ede7677d56"}
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.960129 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5" exitCode=0
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.960178 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5"}
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.960201 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"120ec79d685d8e39b184565b1c63047076832380141fec1a83b868fe6ea8eef7"}
Mar 13 20:32:32 crc kubenswrapper[5029]: I0313 20:32:32.965441 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz"
Mar 13 20:32:33 crc kubenswrapper[5029]: I0313 20:32:33.002140 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" podStartSLOduration=42.002116321 podStartE2EDuration="42.002116321s" podCreationTimestamp="2026-03-13 20:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:32:32.982634127 +0000 UTC m=+312.998716530" watchObservedRunningTime="2026-03-13 20:32:33.002116321 +0000 UTC m=+313.018198744"
Mar 13 20:32:37 crc kubenswrapper[5029]: I0313 20:32:37.002758 5029 generic.go:334] "Generic (PLEG): container finished" podID="5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd" containerID="df388d9e2c2ed3e1864e910668aa372eb61b5adaf0d6cbc0d5b4c63258cd8343" exitCode=0
Mar 13 20:32:37 crc kubenswrapper[5029]: I0313 20:32:37.002941 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557230-trnjq" event={"ID":"5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd","Type":"ContainerDied","Data":"df388d9e2c2ed3e1864e910668aa372eb61b5adaf0d6cbc0d5b4c63258cd8343"}
Mar 13 20:32:38 crc kubenswrapper[5029]: I0313 20:32:38.377839 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557230-trnjq"
Mar 13 20:32:38 crc kubenswrapper[5029]: I0313 20:32:38.466054 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh25b\" (UniqueName: \"kubernetes.io/projected/5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd-kube-api-access-jh25b\") pod \"5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd\" (UID: \"5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd\") "
Mar 13 20:32:38 crc kubenswrapper[5029]: I0313 20:32:38.473713 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd-kube-api-access-jh25b" (OuterVolumeSpecName: "kube-api-access-jh25b") pod "5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd" (UID: "5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd"). InnerVolumeSpecName "kube-api-access-jh25b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:32:38 crc kubenswrapper[5029]: I0313 20:32:38.568794 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh25b\" (UniqueName: \"kubernetes.io/projected/5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd-kube-api-access-jh25b\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:39 crc kubenswrapper[5029]: I0313 20:32:39.018822 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557230-trnjq" event={"ID":"5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd","Type":"ContainerDied","Data":"1d482eb449acacdfed0b93c85904f3981ff912b09dadb970f18960f796049b8b"}
Mar 13 20:32:39 crc kubenswrapper[5029]: I0313 20:32:39.018869 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557230-trnjq"
Mar 13 20:32:39 crc kubenswrapper[5029]: I0313 20:32:39.018882 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d482eb449acacdfed0b93c85904f3981ff912b09dadb970f18960f796049b8b"
Mar 13 20:32:40 crc kubenswrapper[5029]: I0313 20:32:40.958460 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:32:40 crc kubenswrapper[5029]: I0313 20:32:40.958523 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:32:41 crc kubenswrapper[5029]: I0313 20:32:41.213982 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:32:41 crc kubenswrapper[5029]: I0313 20:32:41.451443 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5jkhw"
Mar 13 20:32:41 crc kubenswrapper[5029]: I0313 20:32:41.489562 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5jkhw"
Mar 13 20:32:42 crc kubenswrapper[5029]: I0313 20:32:42.031237 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jkhw"]
Mar 13 20:32:42 crc kubenswrapper[5029]: I0313 20:32:42.074776 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qz4wv"
Mar 13 20:32:43 crc kubenswrapper[5029]: I0313 20:32:43.043105 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5jkhw" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerName="registry-server" containerID="cri-o://ca1ee83c839bcf07433b909552ab6c7228f0819db2440a6bb4b0c6211b2b405a" gracePeriod=2
Mar 13 20:32:45 crc kubenswrapper[5029]: I0313 20:32:45.055112 5029 generic.go:334] "Generic (PLEG): container finished" podID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerID="ca1ee83c839bcf07433b909552ab6c7228f0819db2440a6bb4b0c6211b2b405a" exitCode=0
Mar 13 20:32:45 crc kubenswrapper[5029]: I0313 20:32:45.055161 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jkhw" event={"ID":"9d4a1347-08c4-42b0-9fb6-268fdc83147f","Type":"ContainerDied","Data":"ca1ee83c839bcf07433b909552ab6c7228f0819db2440a6bb4b0c6211b2b405a"}
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.079147 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jkhw" event={"ID":"9d4a1347-08c4-42b0-9fb6-268fdc83147f","Type":"ContainerDied","Data":"9f3a6991cd8150dc45662b600a848095550bb2f60d7b87f52ee72eb0b3cde4b8"}
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.080012 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f3a6991cd8150dc45662b600a848095550bb2f60d7b87f52ee72eb0b3cde4b8"
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.091867 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jkhw"
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.267572 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-catalog-content\") pod \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") "
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.267638 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-utilities\") pod \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") "
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.267756 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgj84\" (UniqueName: \"kubernetes.io/projected/9d4a1347-08c4-42b0-9fb6-268fdc83147f-kube-api-access-bgj84\") pod \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\" (UID: \"9d4a1347-08c4-42b0-9fb6-268fdc83147f\") "
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.269022 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-utilities" (OuterVolumeSpecName: "utilities") pod "9d4a1347-08c4-42b0-9fb6-268fdc83147f" (UID: "9d4a1347-08c4-42b0-9fb6-268fdc83147f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.273043 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4a1347-08c4-42b0-9fb6-268fdc83147f-kube-api-access-bgj84" (OuterVolumeSpecName: "kube-api-access-bgj84") pod "9d4a1347-08c4-42b0-9fb6-268fdc83147f" (UID: "9d4a1347-08c4-42b0-9fb6-268fdc83147f"). InnerVolumeSpecName "kube-api-access-bgj84". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.326384 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d4a1347-08c4-42b0-9fb6-268fdc83147f" (UID: "9d4a1347-08c4-42b0-9fb6-268fdc83147f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.369069 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.369112 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4a1347-08c4-42b0-9fb6-268fdc83147f-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:49 crc kubenswrapper[5029]: I0313 20:32:49.369124 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgj84\" (UniqueName: \"kubernetes.io/projected/9d4a1347-08c4-42b0-9fb6-268fdc83147f-kube-api-access-bgj84\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:50 crc kubenswrapper[5029]: I0313 20:32:50.089791 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jkhw"
Mar 13 20:32:50 crc kubenswrapper[5029]: I0313 20:32:50.123942 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jkhw"]
Mar 13 20:32:50 crc kubenswrapper[5029]: I0313 20:32:50.124002 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5jkhw"]
Mar 13 20:32:50 crc kubenswrapper[5029]: I0313 20:32:50.605664 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" path="/var/lib/kubelet/pods/9d4a1347-08c4-42b0-9fb6-268fdc83147f/volumes"
Mar 13 20:32:51 crc kubenswrapper[5029]: I0313 20:32:51.784880 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d58595869-5lbzz"]
Mar 13 20:32:51 crc kubenswrapper[5029]: I0313 20:32:51.785438 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" podUID="888c9632-2c69-4e11-a3bd-0c3efdbb3672" containerName="controller-manager" containerID="cri-o://f14ca39c01d3b841228cee2c199259704d2a813dab20d51c16f55064c159c49b" gracePeriod=30
Mar 13 20:32:51 crc kubenswrapper[5029]: I0313 20:32:51.882834 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b"]
Mar 13 20:32:51 crc kubenswrapper[5029]: I0313 20:32:51.883101 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" podUID="9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" containerName="route-controller-manager" containerID="cri-o://1986c0190d27bbced0edde57fdc38f57fbbf491d9751391a42b53a128d6cf8cb" gracePeriod=30
Mar 13 20:32:52 crc kubenswrapper[5029]: I0313 20:32:52.070413 5029 patch_prober.go:28] interesting pod/controller-manager-5d58595869-5lbzz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body=
Mar 13 20:32:52 crc kubenswrapper[5029]: I0313 20:32:52.070473 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" podUID="888c9632-2c69-4e11-a3bd-0c3efdbb3672" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused"
Mar 13 20:32:53 crc kubenswrapper[5029]: I0313 20:32:53.417344 5029 generic.go:334] "Generic (PLEG): container finished" podID="9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" containerID="1986c0190d27bbced0edde57fdc38f57fbbf491d9751391a42b53a128d6cf8cb" exitCode=0
Mar 13 20:32:53 crc kubenswrapper[5029]: I0313 20:32:53.417438 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" event={"ID":"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce","Type":"ContainerDied","Data":"1986c0190d27bbced0edde57fdc38f57fbbf491d9751391a42b53a128d6cf8cb"}
Mar 13 20:32:53 crc kubenswrapper[5029]: I0313 20:32:53.424654 5029 generic.go:334] "Generic (PLEG): container finished" podID="888c9632-2c69-4e11-a3bd-0c3efdbb3672" containerID="f14ca39c01d3b841228cee2c199259704d2a813dab20d51c16f55064c159c49b" exitCode=0
Mar 13 20:32:53 crc kubenswrapper[5029]: I0313 20:32:53.424695 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" event={"ID":"888c9632-2c69-4e11-a3bd-0c3efdbb3672","Type":"ContainerDied","Data":"f14ca39c01d3b841228cee2c199259704d2a813dab20d51c16f55064c159c49b"}
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.448522 5029 generic.go:334] "Generic (PLEG): container finished" podID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerID="fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471" exitCode=0
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.448896 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kl2lj" event={"ID":"e33b18fb-9cd7-4c30-bdb0-402734c47cc8","Type":"ContainerDied","Data":"fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471"}
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.456975 5029 generic.go:334] "Generic (PLEG): container finished" podID="553bdc43-797f-401f-9ca0-875060ab0553" containerID="c3d369815cd4841e112c5bc77cd26a96679c471fd2681199105e78449d4a689d" exitCode=0
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.457019 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xlnz" event={"ID":"553bdc43-797f-401f-9ca0-875060ab0553","Type":"ContainerDied","Data":"c3d369815cd4841e112c5bc77cd26a96679c471fd2681199105e78449d4a689d"}
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.589080 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.593443 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.618661 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"]
Mar 13 20:32:54 crc kubenswrapper[5029]: E0313 20:32:54.618872 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerName="extract-content"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.618887 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerName="extract-content"
Mar 13 20:32:54 crc kubenswrapper[5029]: E0313 20:32:54.618896 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerName="registry-server"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.618902 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerName="registry-server"
Mar 13 20:32:54 crc kubenswrapper[5029]: E0313 20:32:54.618913 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" containerName="route-controller-manager"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.618919 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" containerName="route-controller-manager"
Mar 13 20:32:54 crc kubenswrapper[5029]: E0313 20:32:54.618931 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd" containerName="oc"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.618937 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd" containerName="oc"
Mar 13 20:32:54 crc kubenswrapper[5029]: E0313 20:32:54.618947 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c9632-2c69-4e11-a3bd-0c3efdbb3672" containerName="controller-manager"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.618953 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c9632-2c69-4e11-a3bd-0c3efdbb3672" containerName="controller-manager"
Mar 13 20:32:54 crc kubenswrapper[5029]: E0313 20:32:54.618960 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerName="extract-utilities"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.618968 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerName="extract-utilities"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.619060 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4a1347-08c4-42b0-9fb6-268fdc83147f" containerName="registry-server"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.619073 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd" containerName="oc"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.619082 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" containerName="route-controller-manager"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.619090 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="888c9632-2c69-4e11-a3bd-0c3efdbb3672" containerName="controller-manager"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.619421 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.632764 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"]
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.639489 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-proxy-ca-bundles\") pod \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") "
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.639558 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-client-ca\") pod \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") "
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.639586 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-client-ca\") pod \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") "
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.639641 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-config\") pod \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") "
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.639685 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g262x\" (UniqueName: \"kubernetes.io/projected/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-kube-api-access-g262x\") pod \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") "
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.639704 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkjbv\" (UniqueName: \"kubernetes.io/projected/888c9632-2c69-4e11-a3bd-0c3efdbb3672-kube-api-access-wkjbv\") pod \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") "
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.639722 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/888c9632-2c69-4e11-a3bd-0c3efdbb3672-serving-cert\") pod \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\" (UID: \"888c9632-2c69-4e11-a3bd-0c3efdbb3672\") "
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.639762 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-serving-cert\") pod \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") "
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.639780 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-config\") pod \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\" (UID: \"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce\") "
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.640592 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" (UID: "9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.640871 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-config" (OuterVolumeSpecName: "config") pod "9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" (UID: "9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.641349 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-client-ca" (OuterVolumeSpecName: "client-ca") pod "888c9632-2c69-4e11-a3bd-0c3efdbb3672" (UID: "888c9632-2c69-4e11-a3bd-0c3efdbb3672"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.641514 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "888c9632-2c69-4e11-a3bd-0c3efdbb3672" (UID: "888c9632-2c69-4e11-a3bd-0c3efdbb3672"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.644121 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-config" (OuterVolumeSpecName: "config") pod "888c9632-2c69-4e11-a3bd-0c3efdbb3672" (UID: "888c9632-2c69-4e11-a3bd-0c3efdbb3672"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.741513 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-config\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.741608 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa5037c-7a78-4c74-99c8-27e93342fe37-serving-cert\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.741688 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjjrr\" (UniqueName: \"kubernetes.io/projected/9aa5037c-7a78-4c74-99c8-27e93342fe37-kube-api-access-gjjrr\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.741711 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-client-ca\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.741756 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.741768 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.741778 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.741788 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-client-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.741798 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/888c9632-2c69-4e11-a3bd-0c3efdbb3672-client-ca\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.843428 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa5037c-7a78-4c74-99c8-27e93342fe37-serving-cert\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.843569 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjjrr\" (UniqueName: \"kubernetes.io/projected/9aa5037c-7a78-4c74-99c8-27e93342fe37-kube-api-access-gjjrr\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.843597 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-client-ca\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.844957 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-config\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.844866 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-client-ca\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.846025 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-config\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.846948 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa5037c-7a78-4c74-99c8-27e93342fe37-serving-cert\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.866231 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjjrr\" (UniqueName: \"kubernetes.io/projected/9aa5037c-7a78-4c74-99c8-27e93342fe37-kube-api-access-gjjrr\") pod \"route-controller-manager-bc646b7d9-bjh6j\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:54 crc kubenswrapper[5029]: I0313 20:32:54.948360 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.302986 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888c9632-2c69-4e11-a3bd-0c3efdbb3672-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "888c9632-2c69-4e11-a3bd-0c3efdbb3672" (UID: "888c9632-2c69-4e11-a3bd-0c3efdbb3672"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.303065 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888c9632-2c69-4e11-a3bd-0c3efdbb3672-kube-api-access-wkjbv" (OuterVolumeSpecName: "kube-api-access-wkjbv") pod "888c9632-2c69-4e11-a3bd-0c3efdbb3672" (UID: "888c9632-2c69-4e11-a3bd-0c3efdbb3672"). InnerVolumeSpecName "kube-api-access-wkjbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.303232 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-kube-api-access-g262x" (OuterVolumeSpecName: "kube-api-access-g262x") pod "9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" (UID: "9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce"). InnerVolumeSpecName "kube-api-access-g262x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.308439 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" (UID: "9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.355116 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g262x\" (UniqueName: \"kubernetes.io/projected/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-kube-api-access-g262x\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.355163 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkjbv\" (UniqueName: \"kubernetes.io/projected/888c9632-2c69-4e11-a3bd-0c3efdbb3672-kube-api-access-wkjbv\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.355174 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/888c9632-2c69-4e11-a3bd-0c3efdbb3672-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.355185 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.452604 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"]
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.492691 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz" event={"ID":"888c9632-2c69-4e11-a3bd-0c3efdbb3672","Type":"ContainerDied","Data":"6bcdbd09b1105a0f0ef29783b386393dad95463d7f224897b3ed96ede7677d56"}
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.493057 5029 scope.go:117] "RemoveContainer" containerID="f14ca39c01d3b841228cee2c199259704d2a813dab20d51c16f55064c159c49b"
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.492747 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d58595869-5lbzz"
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.497342 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" event={"ID":"9aa5037c-7a78-4c74-99c8-27e93342fe37","Type":"ContainerStarted","Data":"642b2abee1d7ba45a786af03c60f10ffe52d47313440e6338ba53409d0fc58c8"}
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.516770 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" event={"ID":"9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce","Type":"ContainerDied","Data":"4478354c1d525d50e53fc78a8dccfb940f663fd69ac026638a7c8b1a5a5d149f"}
Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.516916 5029 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b" Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.530299 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s58vt" event={"ID":"151390c1-ebb0-49bf-be99-3326fc839781","Type":"ContainerStarted","Data":"d5c8277dafd0da5519b017399b95f07199f91bbd7178b4a3da16fa3f887d4f41"} Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.533636 5029 scope.go:117] "RemoveContainer" containerID="1986c0190d27bbced0edde57fdc38f57fbbf491d9751391a42b53a128d6cf8cb" Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.543127 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d58595869-5lbzz"] Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.556943 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d58595869-5lbzz"] Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.580431 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b"] Mar 13 20:32:55 crc kubenswrapper[5029]: I0313 20:32:55.590138 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb5b74fcb-tg57b"] Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.548722 5029 generic.go:334] "Generic (PLEG): container finished" podID="151390c1-ebb0-49bf-be99-3326fc839781" containerID="d5c8277dafd0da5519b017399b95f07199f91bbd7178b4a3da16fa3f887d4f41" exitCode=0 Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.548798 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s58vt" event={"ID":"151390c1-ebb0-49bf-be99-3326fc839781","Type":"ContainerDied","Data":"d5c8277dafd0da5519b017399b95f07199f91bbd7178b4a3da16fa3f887d4f41"} Mar 13 20:32:56 crc 
kubenswrapper[5029]: I0313 20:32:56.562680 5029 generic.go:334] "Generic (PLEG): container finished" podID="866c95e1-566b-4e67-8822-b6c182cb3378" containerID="8fcef2c32b40494bf2fdaa8be6712e5f4df0a931a4d74917a0479683da6c9cf2" exitCode=0 Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.560918 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" event={"ID":"9aa5037c-7a78-4c74-99c8-27e93342fe37","Type":"ContainerStarted","Data":"91df90316164973f84d0c0bec1d9b74568b64d0a2b8bfda8990c1bd7054cc4c9"} Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.563452 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhg5r" event={"ID":"866c95e1-566b-4e67-8822-b6c182cb3378","Type":"ContainerDied","Data":"8fcef2c32b40494bf2fdaa8be6712e5f4df0a931a4d74917a0479683da6c9cf2"} Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.563482 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.566359 5029 generic.go:334] "Generic (PLEG): container finished" podID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerID="78e06fe6c6a0a3994a216ec86b6bc8a85a6111a25313eeafd09bc189f190bd54" exitCode=0 Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.566397 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpzl2" event={"ID":"5760820d-9df0-4f3e-b14f-1c64e2607ecd","Type":"ContainerDied","Data":"78e06fe6c6a0a3994a216ec86b6bc8a85a6111a25313eeafd09bc189f190bd54"} Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.567469 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.593644 5029 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" podStartSLOduration=5.593621042 podStartE2EDuration="5.593621042s" podCreationTimestamp="2026-03-13 20:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:32:56.583956359 +0000 UTC m=+336.600038752" watchObservedRunningTime="2026-03-13 20:32:56.593621042 +0000 UTC m=+336.609703455" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.607815 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888c9632-2c69-4e11-a3bd-0c3efdbb3672" path="/var/lib/kubelet/pods/888c9632-2c69-4e11-a3bd-0c3efdbb3672/volumes" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.608503 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce" path="/var/lib/kubelet/pods/9542ffe3-ce7f-4ff3-a22e-6d0fffbc34ce/volumes" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.758981 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b86d6b979-htnb9"] Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.761563 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.764376 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.764877 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.765191 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.765492 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.766011 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.766437 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.768828 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b86d6b979-htnb9"] Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.773363 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.877455 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmd6q\" (UniqueName: \"kubernetes.io/projected/8a94b9df-557f-42b1-9421-46b17536bafc-kube-api-access-nmd6q\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " 
pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.877542 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-client-ca\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.877714 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a94b9df-557f-42b1-9421-46b17536bafc-serving-cert\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.877799 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-proxy-ca-bundles\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.877830 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-config\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.979309 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmd6q\" (UniqueName: 
\"kubernetes.io/projected/8a94b9df-557f-42b1-9421-46b17536bafc-kube-api-access-nmd6q\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.979353 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-client-ca\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.979406 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a94b9df-557f-42b1-9421-46b17536bafc-serving-cert\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.979446 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-proxy-ca-bundles\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.979506 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-config\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.980576 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-client-ca\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.981056 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-config\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.982627 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-proxy-ca-bundles\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.993545 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a94b9df-557f-42b1-9421-46b17536bafc-serving-cert\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:56 crc kubenswrapper[5029]: I0313 20:32:56.995585 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmd6q\" (UniqueName: \"kubernetes.io/projected/8a94b9df-557f-42b1-9421-46b17536bafc-kube-api-access-nmd6q\") pod \"controller-manager-7b86d6b979-htnb9\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 
20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.103640 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.295757 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b86d6b979-htnb9"] Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.576734 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhg5r" event={"ID":"866c95e1-566b-4e67-8822-b6c182cb3378","Type":"ContainerStarted","Data":"cbf2d14681501515c511963c2ec733c6ad3f6bc3a8be17580ae7738bca956844"} Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.579161 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpzl2" event={"ID":"5760820d-9df0-4f3e-b14f-1c64e2607ecd","Type":"ContainerStarted","Data":"ea431d9d073770b052681e5acfa214dc6ca5c51dc6e4ecff60dfab60fd2f9387"} Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.581535 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xlnz" event={"ID":"553bdc43-797f-401f-9ca0-875060ab0553","Type":"ContainerStarted","Data":"29f97a6ac0965e8116f60cf3e39b0f1d0c9e462aaf94fce947d5c7877b0ead80"} Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.583590 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" event={"ID":"8a94b9df-557f-42b1-9421-46b17536bafc","Type":"ContainerStarted","Data":"0fcec0839b30e5d78e4ecabcfc5cdf213599a73cc0447991830fe63932dac556"} Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.583618 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" 
event={"ID":"8a94b9df-557f-42b1-9421-46b17536bafc","Type":"ContainerStarted","Data":"f44462996df94b37a0f81c65a558af61c66d188224d95aca520557607d01447a"} Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.583995 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.584970 5029 patch_prober.go:28] interesting pod/controller-manager-7b86d6b979-htnb9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.585004 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" podUID="8a94b9df-557f-42b1-9421-46b17536bafc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.587000 5029 generic.go:334] "Generic (PLEG): container finished" podID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerID="c20eec6fc26eb49f3dd544a9135c08ed7e30e303e843e374256b157462d81f7d" exitCode=0 Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.587054 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-494x8" event={"ID":"3c8fadb2-962e-4bca-8305-a51b8d2334bb","Type":"ContainerDied","Data":"c20eec6fc26eb49f3dd544a9135c08ed7e30e303e843e374256b157462d81f7d"} Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.589728 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s58vt" 
event={"ID":"151390c1-ebb0-49bf-be99-3326fc839781","Type":"ContainerStarted","Data":"8cc01f9bedca0104f539695ae721b0dc61aab86685a7c3bfe37ba10fdfd2ee5c"} Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.598695 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dhg5r" podStartSLOduration=3.13905357 podStartE2EDuration="1m44.598677514s" podCreationTimestamp="2026-03-13 20:31:13 +0000 UTC" firstStartedPulling="2026-03-13 20:31:15.996186697 +0000 UTC m=+236.012269100" lastFinishedPulling="2026-03-13 20:32:57.455810641 +0000 UTC m=+337.471893044" observedRunningTime="2026-03-13 20:32:57.597728309 +0000 UTC m=+337.613810732" watchObservedRunningTime="2026-03-13 20:32:57.598677514 +0000 UTC m=+337.614759927" Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.622292 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s58vt" podStartSLOduration=2.571155269 podStartE2EDuration="1m43.622264578s" podCreationTimestamp="2026-03-13 20:31:14 +0000 UTC" firstStartedPulling="2026-03-13 20:31:16.136914324 +0000 UTC m=+236.152996727" lastFinishedPulling="2026-03-13 20:32:57.188023633 +0000 UTC m=+337.204106036" observedRunningTime="2026-03-13 20:32:57.617421725 +0000 UTC m=+337.633504128" watchObservedRunningTime="2026-03-13 20:32:57.622264578 +0000 UTC m=+337.638346981" Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.659595 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" podStartSLOduration=6.659567734 podStartE2EDuration="6.659567734s" podCreationTimestamp="2026-03-13 20:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:32:57.658051262 +0000 UTC m=+337.674133665" watchObservedRunningTime="2026-03-13 20:32:57.659567734 +0000 UTC 
m=+337.675650147" Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.683835 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2xlnz" podStartSLOduration=4.868686165 podStartE2EDuration="1m45.683794155s" podCreationTimestamp="2026-03-13 20:31:12 +0000 UTC" firstStartedPulling="2026-03-13 20:31:16.119146456 +0000 UTC m=+236.135228859" lastFinishedPulling="2026-03-13 20:32:56.934254446 +0000 UTC m=+336.950336849" observedRunningTime="2026-03-13 20:32:57.678250514 +0000 UTC m=+337.694332937" watchObservedRunningTime="2026-03-13 20:32:57.683794155 +0000 UTC m=+337.699876568" Mar 13 20:32:57 crc kubenswrapper[5029]: I0313 20:32:57.709502 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vpzl2" podStartSLOduration=3.429110061 podStartE2EDuration="1m44.709474554s" podCreationTimestamp="2026-03-13 20:31:13 +0000 UTC" firstStartedPulling="2026-03-13 20:31:16.067530907 +0000 UTC m=+236.083613310" lastFinishedPulling="2026-03-13 20:32:57.34789539 +0000 UTC m=+337.363977803" observedRunningTime="2026-03-13 20:32:57.705908046 +0000 UTC m=+337.721990479" watchObservedRunningTime="2026-03-13 20:32:57.709474554 +0000 UTC m=+337.725556957" Mar 13 20:32:58 crc kubenswrapper[5029]: I0313 20:32:58.609434 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:32:58 crc kubenswrapper[5029]: I0313 20:32:58.609931 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kl2lj" event={"ID":"e33b18fb-9cd7-4c30-bdb0-402734c47cc8","Type":"ContainerStarted","Data":"8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96"} Mar 13 20:32:58 crc kubenswrapper[5029]: I0313 20:32:58.609956 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-494x8" 
event={"ID":"3c8fadb2-962e-4bca-8305-a51b8d2334bb","Type":"ContainerStarted","Data":"d1842e0b2d093158ee852e0b3bc2ec06d11a44ae408f34bb65470916033ed1e4"} Mar 13 20:32:58 crc kubenswrapper[5029]: I0313 20:32:58.656754 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kl2lj" podStartSLOduration=5.099887134 podStartE2EDuration="1m48.65672974s" podCreationTimestamp="2026-03-13 20:31:10 +0000 UTC" firstStartedPulling="2026-03-13 20:31:14.195413784 +0000 UTC m=+234.211496187" lastFinishedPulling="2026-03-13 20:32:57.75225639 +0000 UTC m=+337.768338793" observedRunningTime="2026-03-13 20:32:58.651404185 +0000 UTC m=+338.667486598" watchObservedRunningTime="2026-03-13 20:32:58.65672974 +0000 UTC m=+338.672812143" Mar 13 20:32:58 crc kubenswrapper[5029]: I0313 20:32:58.675160 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-494x8" podStartSLOduration=4.015480769 podStartE2EDuration="1m47.675134502s" podCreationTimestamp="2026-03-13 20:31:11 +0000 UTC" firstStartedPulling="2026-03-13 20:31:14.410266454 +0000 UTC m=+234.426348847" lastFinishedPulling="2026-03-13 20:32:58.069920177 +0000 UTC m=+338.086002580" observedRunningTime="2026-03-13 20:32:58.670552887 +0000 UTC m=+338.686635290" watchObservedRunningTime="2026-03-13 20:32:58.675134502 +0000 UTC m=+338.691216905" Mar 13 20:33:01 crc kubenswrapper[5029]: I0313 20:33:01.103816 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kl2lj" Mar 13 20:33:01 crc kubenswrapper[5029]: I0313 20:33:01.104111 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kl2lj" Mar 13 20:33:01 crc kubenswrapper[5029]: I0313 20:33:01.142650 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kl2lj" Mar 13 
20:33:01 crc kubenswrapper[5029]: I0313 20:33:01.506162 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-494x8" Mar 13 20:33:01 crc kubenswrapper[5029]: I0313 20:33:01.506217 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-494x8" Mar 13 20:33:01 crc kubenswrapper[5029]: I0313 20:33:01.552297 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-494x8" Mar 13 20:33:03 crc kubenswrapper[5029]: I0313 20:33:03.117798 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:33:03 crc kubenswrapper[5029]: I0313 20:33:03.117884 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:33:03 crc kubenswrapper[5029]: I0313 20:33:03.165252 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:33:03 crc kubenswrapper[5029]: I0313 20:33:03.557502 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:33:03 crc kubenswrapper[5029]: I0313 20:33:03.558614 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:33:03 crc kubenswrapper[5029]: I0313 20:33:03.603446 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:33:03 crc kubenswrapper[5029]: I0313 20:33:03.683086 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:33:03 crc kubenswrapper[5029]: I0313 20:33:03.691654 5029 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.046063 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.046327 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.642889 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.642951 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.676505 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.993961 5029 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.994616 5029 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.994826 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.995003 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82" gracePeriod=15 Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.995118 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0" gracePeriod=15 Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.995215 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350" gracePeriod=15 Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.995282 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029" gracePeriod=15 Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.995041 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5" gracePeriod=15 Mar 13 20:33:04 crc 
kubenswrapper[5029]: I0313 20:33:04.995943 5029 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996172 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996188 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996202 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996211 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996222 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996231 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996239 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996248 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996261 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 13 
20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996269 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996280 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996287 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996297 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996304 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996313 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996321 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996335 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996342 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: E0313 20:33:04.996355 5029 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996363 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996479 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996498 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996511 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996521 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996530 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996539 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996548 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996559 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
Mar 13 20:33:04 crc kubenswrapper[5029]: I0313 20:33:04.996792 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 13 20:33:05 crc kubenswrapper[5029]: E0313 20:33:05.065707 5029 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.083567 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vpzl2" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerName="registry-server" probeResult="failure" output=< Mar 13 20:33:05 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s Mar 13 20:33:05 crc kubenswrapper[5029]: > Mar 13 20:33:05 crc kubenswrapper[5029]: E0313 20:33:05.084417 5029 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.181:6443: connect: connection refused" event=< Mar 13 20:33:05 crc kubenswrapper[5029]: &Event{ObjectMeta:{redhat-operators-vpzl2.189c80d039841437 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-vpzl2,UID:5760820d-9df0-4f3e-b14f-1c64e2607ecd,APIVersion:v1,ResourceVersion:28530,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Startup probe failed: timeout: failed to connect service ":50051" within 1s Mar 13 20:33:05 crc kubenswrapper[5029]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:33:05.083642935 +0000 UTC m=+345.099725338,LastTimestamp:2026-03-13 20:33:05.083642935 +0000 UTC 
m=+345.099725338,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 13 20:33:05 crc kubenswrapper[5029]: > Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.117503 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.118123 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.118151 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.118181 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.118207 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.118232 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.118397 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.118589 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220076 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220151 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220171 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220190 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220214 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220215 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220237 5029 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220268 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220288 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220296 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220320 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220325 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220348 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220348 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220306 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.220399 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.366194 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: W0313 20:33:05.384768 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ca6fc3e34ee366a98b11fbeca7a385002f27edceda9ade5e378a55784a548375 WatchSource:0}: Error finding container ca6fc3e34ee366a98b11fbeca7a385002f27edceda9ade5e378a55784a548375: Status 404 returned error can't find the container with id ca6fc3e34ee366a98b11fbeca7a385002f27edceda9ade5e378a55784a548375 Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.482788 5029 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.482906 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.653069 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.656551 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.657659 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5" exitCode=0 Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.657696 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029" exitCode=0 Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.657705 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0" exitCode=0 Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.657713 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350" exitCode=2 Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.657801 5029 scope.go:117] "RemoveContainer" containerID="fce2883f06367da4f62ef90405402502bebbb76d49da49455537690b2445a73c" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.661092 5029 generic.go:334] "Generic (PLEG): container finished" podID="5457502b-4e3f-463b-87ae-4013109d2298" containerID="f2257a9df9be6d1dbecd60af052897fd368b4587b13b4ee3880b732b37447ded" exitCode=0 Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.661239 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5457502b-4e3f-463b-87ae-4013109d2298","Type":"ContainerDied","Data":"f2257a9df9be6d1dbecd60af052897fd368b4587b13b4ee3880b732b37447ded"} Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.662840 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:05 crc 
kubenswrapper[5029]: I0313 20:33:05.663417 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9"} Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.663481 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ca6fc3e34ee366a98b11fbeca7a385002f27edceda9ade5e378a55784a548375"} Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.663352 5029 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.664910 5029 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:05 crc kubenswrapper[5029]: E0313 20:33:05.665129 5029 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.665578 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.703821 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.704625 5029 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.704986 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:05 crc kubenswrapper[5029]: I0313 20:33:05.705427 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:06 crc kubenswrapper[5029]: E0313 20:33:06.159190 5029 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:06 crc kubenswrapper[5029]: E0313 20:33:06.160190 5029 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:06 crc kubenswrapper[5029]: E0313 20:33:06.160532 5029 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:06 crc kubenswrapper[5029]: E0313 20:33:06.160760 5029 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:06 crc kubenswrapper[5029]: E0313 20:33:06.161084 5029 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:06 crc kubenswrapper[5029]: I0313 20:33:06.161127 5029 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 20:33:06 crc kubenswrapper[5029]: E0313 20:33:06.161391 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="200ms" Mar 13 20:33:06 crc kubenswrapper[5029]: E0313 20:33:06.363397 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="400ms" Mar 13 20:33:06 crc kubenswrapper[5029]: I0313 20:33:06.672217 5029 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 13 20:33:06 crc kubenswrapper[5029]: E0313 20:33:06.674136 5029 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:06 crc kubenswrapper[5029]: E0313 20:33:06.764686 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="800ms" Mar 13 20:33:06 crc kubenswrapper[5029]: I0313 20:33:06.990554 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:33:06 crc kubenswrapper[5029]: I0313 20:33:06.991114 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:06 crc kubenswrapper[5029]: I0313 20:33:06.991385 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.008205 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:33:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:33:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:33:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T20:33:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:22a26ca7265384c59de9df352ac56df0636cb2473ff4a84ca7dfa03470a7bf8f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2bfa5e9d9d1aa565071157dd88f2d4a8598f42b07ebc94f3d3631937431c823b\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1248394461},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"
sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:82d84f910182e47fbe13b1f3721dc3eb1693c843de5068c2ab901ed2062d7b1b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d20f666fd2e2b39827f66f19fa1b8168f44b60833ebb8676a93b20e8f5706088\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221686985},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d
8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.008663 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.008951 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.009277 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.009629 5029 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.009668 5029 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 
20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.144554 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-kubelet-dir\") pod \"5457502b-4e3f-463b-87ae-4013109d2298\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.144694 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5457502b-4e3f-463b-87ae-4013109d2298" (UID: "5457502b-4e3f-463b-87ae-4013109d2298"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.145047 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-var-lock\") pod \"5457502b-4e3f-463b-87ae-4013109d2298\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.145117 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5457502b-4e3f-463b-87ae-4013109d2298-kube-api-access\") pod \"5457502b-4e3f-463b-87ae-4013109d2298\" (UID: \"5457502b-4e3f-463b-87ae-4013109d2298\") " Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.145157 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-var-lock" (OuterVolumeSpecName: "var-lock") pod "5457502b-4e3f-463b-87ae-4013109d2298" (UID: "5457502b-4e3f-463b-87ae-4013109d2298"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.145395 5029 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-var-lock\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.145417 5029 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5457502b-4e3f-463b-87ae-4013109d2298-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.165056 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5457502b-4e3f-463b-87ae-4013109d2298-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5457502b-4e3f-463b-87ae-4013109d2298" (UID: "5457502b-4e3f-463b-87ae-4013109d2298"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.247147 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5457502b-4e3f-463b-87ae-4013109d2298-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.302208 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.303115 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.303795 5029 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.304278 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.304699 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.448706 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.449230 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.448827 5029 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.449324 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.449366 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.449473 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.449843 5029 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.449921 5029 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.449937 5029 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.566272 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="1.6s" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.680488 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.681458 5029 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82" exitCode=0 Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.681556 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.681595 5029 scope.go:117] "RemoveContainer" containerID="61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.682748 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5457502b-4e3f-463b-87ae-4013109d2298","Type":"ContainerDied","Data":"71fc6799eb91cd8acd52b4e9ea3b5363302e2caa58e161e0b7711bf967f0df1d"} Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.682796 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71fc6799eb91cd8acd52b4e9ea3b5363302e2caa58e161e0b7711bf967f0df1d" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.682834 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.697922 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.698020 5029 scope.go:117] "RemoveContainer" containerID="398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.698453 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.698813 
5029 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.699319 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.699602 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.699956 5029 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.711207 5029 scope.go:117] "RemoveContainer" containerID="a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.725692 5029 scope.go:117] "RemoveContainer" containerID="182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.741961 5029 scope.go:117] "RemoveContainer" 
containerID="1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.756782 5029 scope.go:117] "RemoveContainer" containerID="ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.772428 5029 scope.go:117] "RemoveContainer" containerID="61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.772874 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\": container with ID starting with 61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5 not found: ID does not exist" containerID="61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.772922 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5"} err="failed to get container status \"61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\": rpc error: code = NotFound desc = could not find container \"61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5\": container with ID starting with 61ecaa2dd8149546643e0f793e872cd0782bc0bb860fe8918140feae56bc53c5 not found: ID does not exist" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.772960 5029 scope.go:117] "RemoveContainer" containerID="398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.773418 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\": container with ID starting with 
398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029 not found: ID does not exist" containerID="398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.773461 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029"} err="failed to get container status \"398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\": rpc error: code = NotFound desc = could not find container \"398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029\": container with ID starting with 398073d9bb24db2d1c29db01f0a318baa45bb50a4ea523c59e42cac053c67029 not found: ID does not exist" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.773491 5029 scope.go:117] "RemoveContainer" containerID="a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.773806 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\": container with ID starting with a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0 not found: ID does not exist" containerID="a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.773875 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0"} err="failed to get container status \"a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\": rpc error: code = NotFound desc = could not find container \"a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0\": container with ID starting with a8fdcb67e0796f7a3e92f46a3d1e2a60fbe9b29d52261bc844de236b7ad2c4e0 not found: ID does not 
exist" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.773910 5029 scope.go:117] "RemoveContainer" containerID="182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.774338 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\": container with ID starting with 182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350 not found: ID does not exist" containerID="182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.774366 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350"} err="failed to get container status \"182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\": rpc error: code = NotFound desc = could not find container \"182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350\": container with ID starting with 182cf0decb79595f58d22703590f037c5cd3b352f7a997c83d194d5aeb657350 not found: ID does not exist" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.774383 5029 scope.go:117] "RemoveContainer" containerID="1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.774650 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\": container with ID starting with 1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82 not found: ID does not exist" containerID="1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.774670 5029 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82"} err="failed to get container status \"1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\": rpc error: code = NotFound desc = could not find container \"1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82\": container with ID starting with 1d81692203d6c0661c718514db23dd07610a3823e8a6c1858e1541044102db82 not found: ID does not exist" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.774683 5029 scope.go:117] "RemoveContainer" containerID="ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd" Mar 13 20:33:07 crc kubenswrapper[5029]: E0313 20:33:07.774950 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\": container with ID starting with ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd not found: ID does not exist" containerID="ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd" Mar 13 20:33:07 crc kubenswrapper[5029]: I0313 20:33:07.774976 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd"} err="failed to get container status \"ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\": rpc error: code = NotFound desc = could not find container \"ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd\": container with ID starting with ed24f3beec93897717c9a77af48f3a61b47235affcfb9c27a5934dd6f14bbebd not found: ID does not exist" Mar 13 20:33:08 crc kubenswrapper[5029]: I0313 20:33:08.607417 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 13 20:33:09 crc 
kubenswrapper[5029]: E0313 20:33:09.167987 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="3.2s" Mar 13 20:33:10 crc kubenswrapper[5029]: I0313 20:33:10.602466 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:10 crc kubenswrapper[5029]: I0313 20:33:10.603444 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:11 crc kubenswrapper[5029]: I0313 20:33:11.160994 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kl2lj" Mar 13 20:33:11 crc kubenswrapper[5029]: I0313 20:33:11.161470 5029 status_manager.go:851] "Failed to get status for pod" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" pod="openshift-marketplace/community-operators-kl2lj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kl2lj\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:11 crc kubenswrapper[5029]: I0313 20:33:11.161686 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:11 crc kubenswrapper[5029]: I0313 20:33:11.161943 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:11 crc kubenswrapper[5029]: I0313 20:33:11.542314 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-494x8" Mar 13 20:33:11 crc kubenswrapper[5029]: I0313 20:33:11.543044 5029 status_manager.go:851] "Failed to get status for pod" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" pod="openshift-marketplace/community-operators-kl2lj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kl2lj\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:11 crc kubenswrapper[5029]: I0313 20:33:11.543505 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:11 crc kubenswrapper[5029]: I0313 20:33:11.543839 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:11 crc kubenswrapper[5029]: I0313 20:33:11.544786 5029 
status_manager.go:851] "Failed to get status for pod" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" pod="openshift-marketplace/community-operators-494x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-494x8\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:12 crc kubenswrapper[5029]: E0313 20:33:12.369925 5029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="6.4s" Mar 13 20:33:12 crc kubenswrapper[5029]: E0313 20:33:12.735754 5029 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.181:6443: connect: connection refused" event=< Mar 13 20:33:12 crc kubenswrapper[5029]: &Event{ObjectMeta:{redhat-operators-vpzl2.189c80d039841437 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-vpzl2,UID:5760820d-9df0-4f3e-b14f-1c64e2607ecd,APIVersion:v1,ResourceVersion:28530,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Startup probe failed: timeout: failed to connect service ":50051" within 1s Mar 13 20:33:12 crc kubenswrapper[5029]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-13 20:33:05.083642935 +0000 UTC m=+345.099725338,LastTimestamp:2026-03-13 20:33:05.083642935 +0000 UTC m=+345.099725338,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 13 20:33:12 crc kubenswrapper[5029]: > Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.090637 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.091741 5029 status_manager.go:851] "Failed to get status for pod" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" pod="openshift-marketplace/redhat-operators-vpzl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vpzl2\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.092238 5029 status_manager.go:851] "Failed to get status for pod" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" pod="openshift-marketplace/community-operators-kl2lj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kl2lj\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.092732 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.097775 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.098975 5029 status_manager.go:851] "Failed to get status for pod" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" pod="openshift-marketplace/community-operators-494x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-494x8\": dial tcp 
38.102.83.181:6443: connect: connection refused" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.133885 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.134584 5029 status_manager.go:851] "Failed to get status for pod" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" pod="openshift-marketplace/redhat-operators-vpzl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vpzl2\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.135214 5029 status_manager.go:851] "Failed to get status for pod" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" pod="openshift-marketplace/community-operators-kl2lj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kl2lj\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.135768 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.136290 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:14 crc kubenswrapper[5029]: I0313 20:33:14.136772 5029 status_manager.go:851] "Failed to get status for pod" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" 
pod="openshift-marketplace/community-operators-494x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-494x8\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.598565 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.599558 5029 status_manager.go:851] "Failed to get status for pod" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" pod="openshift-marketplace/community-operators-494x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-494x8\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.600251 5029 status_manager.go:851] "Failed to get status for pod" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" pod="openshift-marketplace/redhat-operators-vpzl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vpzl2\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.600623 5029 status_manager.go:851] "Failed to get status for pod" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" pod="openshift-marketplace/community-operators-kl2lj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kl2lj\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.601195 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" 
Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.601450 5029 status_manager.go:851] "Failed to get status for pod" podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.627053 5029 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.627092 5029 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:15 crc kubenswrapper[5029]: E0313 20:33:15.627455 5029 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.628207 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:15 crc kubenswrapper[5029]: W0313 20:33:15.667216 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d2bea0b9161fa1005e531b96ee4d569a5d88d61636c6bedabf471872c417f918 WatchSource:0}: Error finding container d2bea0b9161fa1005e531b96ee4d569a5d88d61636c6bedabf471872c417f918: Status 404 returned error can't find the container with id d2bea0b9161fa1005e531b96ee4d569a5d88d61636c6bedabf471872c417f918 Mar 13 20:33:15 crc kubenswrapper[5029]: I0313 20:33:15.726174 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d2bea0b9161fa1005e531b96ee4d569a5d88d61636c6bedabf471872c417f918"} Mar 13 20:33:16 crc kubenswrapper[5029]: I0313 20:33:16.742674 5029 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7808f05dbd2e43d5e17eb420dfbb925bb1524d6f37b372861047b8174153b1c0" exitCode=0 Mar 13 20:33:16 crc kubenswrapper[5029]: I0313 20:33:16.742729 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7808f05dbd2e43d5e17eb420dfbb925bb1524d6f37b372861047b8174153b1c0"} Mar 13 20:33:16 crc kubenswrapper[5029]: I0313 20:33:16.743004 5029 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:16 crc kubenswrapper[5029]: I0313 20:33:16.743031 5029 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:16 crc kubenswrapper[5029]: I0313 20:33:16.743356 5029 status_manager.go:851] 
"Failed to get status for pod" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" pod="openshift-marketplace/community-operators-494x8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-494x8\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:16 crc kubenswrapper[5029]: E0313 20:33:16.743566 5029 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:16 crc kubenswrapper[5029]: I0313 20:33:16.743582 5029 status_manager.go:851] "Failed to get status for pod" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" pod="openshift-marketplace/redhat-operators-vpzl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vpzl2\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:16 crc kubenswrapper[5029]: I0313 20:33:16.743774 5029 status_manager.go:851] "Failed to get status for pod" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" pod="openshift-marketplace/community-operators-kl2lj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kl2lj\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:16 crc kubenswrapper[5029]: I0313 20:33:16.744033 5029 status_manager.go:851] "Failed to get status for pod" podUID="151390c1-ebb0-49bf-be99-3326fc839781" pod="openshift-marketplace/redhat-operators-s58vt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s58vt\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:16 crc kubenswrapper[5029]: I0313 20:33:16.744304 5029 status_manager.go:851] "Failed to get status for pod" 
podUID="5457502b-4e3f-463b-87ae-4013109d2298" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Mar 13 20:33:17 crc kubenswrapper[5029]: I0313 20:33:17.755526 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dba922a97f7c7f63546b00a542680e92a138b8741dccd589182db3a646fd5364"} Mar 13 20:33:17 crc kubenswrapper[5029]: I0313 20:33:17.756158 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ed8c00381466b92252f9bf3fb27530c8eb798a98841537634de6186a63df959b"} Mar 13 20:33:17 crc kubenswrapper[5029]: I0313 20:33:17.756177 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a798059cd2bf226307910aee1d775f746ffc926bc5cd98647e88763b993157a9"} Mar 13 20:33:17 crc kubenswrapper[5029]: I0313 20:33:17.756188 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"18dbf6938126d00458bc0cefe8f0acc0f24d72fe20c6850eda7eadfd17754385"} Mar 13 20:33:18 crc kubenswrapper[5029]: I0313 20:33:18.765373 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 13 20:33:18 crc kubenswrapper[5029]: I0313 20:33:18.766118 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 13 20:33:18 crc kubenswrapper[5029]: I0313 20:33:18.766190 5029 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7" exitCode=1 Mar 13 20:33:18 crc kubenswrapper[5029]: I0313 20:33:18.766276 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7"} Mar 13 20:33:18 crc kubenswrapper[5029]: I0313 20:33:18.766927 5029 scope.go:117] "RemoveContainer" containerID="a88a70db55f1e346d289b275a1bf35220fd5c7d5975454653285bda0352f99f7" Mar 13 20:33:18 crc kubenswrapper[5029]: I0313 20:33:18.770789 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c8827f09113806b29aae5b9d80414ce2eba04237bb2d39f9cd8e5f8ef36db250"} Mar 13 20:33:18 crc kubenswrapper[5029]: I0313 20:33:18.771058 5029 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:18 crc kubenswrapper[5029]: I0313 20:33:18.771081 5029 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:18 crc kubenswrapper[5029]: I0313 20:33:18.771310 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:19 crc kubenswrapper[5029]: I0313 20:33:19.606468 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:33:19 crc kubenswrapper[5029]: I0313 20:33:19.781336 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 13 20:33:19 crc kubenswrapper[5029]: I0313 20:33:19.782098 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 13 20:33:19 crc kubenswrapper[5029]: I0313 20:33:19.782191 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7d0e4dc51dbce3999adcf8c1de64e8f813bebcd833af042f3402046832c61da4"} Mar 13 20:33:20 crc kubenswrapper[5029]: I0313 20:33:20.628761 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:20 crc kubenswrapper[5029]: I0313 20:33:20.628821 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:20 crc kubenswrapper[5029]: I0313 20:33:20.634051 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:23 crc kubenswrapper[5029]: I0313 20:33:23.786738 5029 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:23 crc kubenswrapper[5029]: I0313 20:33:23.810963 5029 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:23 crc kubenswrapper[5029]: I0313 20:33:23.811000 5029 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:23 crc kubenswrapper[5029]: I0313 20:33:23.816383 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:23 crc kubenswrapper[5029]: I0313 20:33:23.937734 5029 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="662af66f-0e99-4f49-8028-4acd295d53a7" Mar 13 20:33:24 crc kubenswrapper[5029]: I0313 20:33:24.316947 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:33:24 crc kubenswrapper[5029]: I0313 20:33:24.321583 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:33:24 crc kubenswrapper[5029]: I0313 20:33:24.817746 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:33:24 crc kubenswrapper[5029]: I0313 20:33:24.818231 5029 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:24 crc kubenswrapper[5029]: I0313 20:33:24.818269 5029 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="156f2844-a3fc-4b2b-affe-2340ca467835" Mar 13 20:33:24 crc kubenswrapper[5029]: I0313 20:33:24.821611 5029 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="662af66f-0e99-4f49-8028-4acd295d53a7" Mar 13 20:33:29 crc kubenswrapper[5029]: I0313 20:33:29.609844 5029 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 13 20:33:31 crc kubenswrapper[5029]: I0313 20:33:31.071882 5029 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 20:33:31 crc kubenswrapper[5029]: I0313 20:33:31.306937 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 20:33:31 crc kubenswrapper[5029]: I0313 20:33:31.355752 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 20:33:31 crc kubenswrapper[5029]: I0313 20:33:31.760636 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 13 20:33:31 crc kubenswrapper[5029]: I0313 20:33:31.929630 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 20:33:31 crc kubenswrapper[5029]: I0313 20:33:31.980045 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 20:33:32 crc kubenswrapper[5029]: I0313 20:33:32.139476 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 20:33:32 crc kubenswrapper[5029]: I0313 20:33:32.553730 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 20:33:32 crc kubenswrapper[5029]: I0313 20:33:32.583593 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 20:33:33 crc kubenswrapper[5029]: I0313 20:33:33.108173 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 20:33:33 crc kubenswrapper[5029]: I0313 20:33:33.599055 5029 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 20:33:33 crc kubenswrapper[5029]: I0313 20:33:33.695750 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 20:33:33 crc kubenswrapper[5029]: I0313 20:33:33.772285 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 20:33:34 crc kubenswrapper[5029]: I0313 20:33:34.091211 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 13 20:33:34 crc kubenswrapper[5029]: I0313 20:33:34.126363 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 20:33:34 crc kubenswrapper[5029]: I0313 20:33:34.211780 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 20:33:34 crc kubenswrapper[5029]: I0313 20:33:34.291669 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 20:33:34 crc kubenswrapper[5029]: I0313 20:33:34.740237 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.146910 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.155489 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.500300 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 13 
20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.527831 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.644571 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.656089 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.713467 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.767913 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.936926 5029 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.939697 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 20:33:35 crc kubenswrapper[5029]: I0313 20:33:35.946000 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 20:33:36 crc kubenswrapper[5029]: I0313 20:33:36.039155 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 20:33:36 crc kubenswrapper[5029]: I0313 20:33:36.373513 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 20:33:36 crc kubenswrapper[5029]: I0313 20:33:36.440805 5029 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 20:33:36 crc kubenswrapper[5029]: I0313 20:33:36.911953 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 13 20:33:37 crc kubenswrapper[5029]: I0313 20:33:37.022960 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 20:33:37 crc kubenswrapper[5029]: I0313 20:33:37.137908 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 20:33:37 crc kubenswrapper[5029]: I0313 20:33:37.262381 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 20:33:37 crc kubenswrapper[5029]: I0313 20:33:37.272544 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 20:33:37 crc kubenswrapper[5029]: I0313 20:33:37.811236 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 20:33:37 crc kubenswrapper[5029]: I0313 20:33:37.835813 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 20:33:37 crc kubenswrapper[5029]: I0313 20:33:37.850753 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.067960 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.092876 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.207018 5029 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.287289 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.331438 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.342434 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.683431 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.738987 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.934149 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 20:33:38 crc kubenswrapper[5029]: I0313 20:33:38.994871 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.037146 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.069743 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.186589 5029 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.205923 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.273629 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.299122 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.318511 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.388157 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.465241 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.564899 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.588292 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.774390 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.847474 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.925022 5029 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 20:33:39 crc kubenswrapper[5029]: I0313 20:33:39.951624 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.125879 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.179551 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.198411 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.227360 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.267172 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.336067 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.347133 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.415198 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.525455 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.582998 
5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.615986 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.678881 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.684577 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.689156 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.704352 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.870872 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.873901 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 13 20:33:40 crc kubenswrapper[5029]: I0313 20:33:40.913096 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.012473 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.061628 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 13 20:33:41 crc 
kubenswrapper[5029]: I0313 20:33:41.091947 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.119511 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.128788 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.133038 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.166830 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.276886 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.352259 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.414065 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.463183 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.586907 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.829338 5029 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.895549 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.917535 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.935833 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.957375 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.963943 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.982284 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 20:33:41 crc kubenswrapper[5029]: I0313 20:33:41.994331 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.028422 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.045890 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.046038 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.048086 5029 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.064682 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.074660 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.187804 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.274747 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.364318 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.407264 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.444303 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.457837 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.500629 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.520963 5029 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.532946 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.583745 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.599914 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.629176 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.643487 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.688507 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.721838 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.795170 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.815925 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.834510 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.882087 
5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.882220 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.964568 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 13 20:33:42 crc kubenswrapper[5029]: I0313 20:33:42.972625 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.028176 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.076492 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.143768 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.226351 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.255456 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.268347 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.422558 5029 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.423755 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.428803 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.450424 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.708479 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.724336 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.738560 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.791404 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.848483 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.849965 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 20:33:43 crc kubenswrapper[5029]: I0313 20:33:43.914937 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.017055 5029 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.048755 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.173840 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.175762 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.364761 5029 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.430016 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.548517 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.579453 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.814334 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.845462 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.847534 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.862536 
5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.944003 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 20:33:44 crc kubenswrapper[5029]: I0313 20:33:44.994647 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.040819 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.070439 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.163502 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.206153 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.283780 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.305975 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.358674 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.404612 5029 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"kube-root-ca.crt" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.460492 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.462314 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.468594 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.511896 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.547750 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.676381 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.811410 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.886478 5029 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.891888 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.891969 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.899101 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.926840 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.951917 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 20:33:45 crc kubenswrapper[5029]: I0313 20:33:45.952112 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.95209263 podStartE2EDuration="22.95209263s" podCreationTimestamp="2026-03-13 20:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:33:45.922737359 +0000 UTC m=+385.938819772" watchObservedRunningTime="2026-03-13 20:33:45.95209263 +0000 UTC m=+385.968175033" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.067740 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.117093 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.133460 5029 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.133766 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9" gracePeriod=5 Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.148764 5029 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.172844 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.181484 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.229293 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.243032 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.266685 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.493411 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.497157 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.548635 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.598604 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.650409 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" 
Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.664699 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.726827 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.793706 5029 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.801154 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.820976 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.851005 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 20:33:46 crc kubenswrapper[5029]: I0313 20:33:46.895740 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.019560 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.087228 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.140151 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.140162 5029 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.150845 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.157505 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.163619 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.174627 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.367069 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.463147 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.468136 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.520675 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.581765 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.645087 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 
20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.709582 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.815563 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 20:33:47 crc kubenswrapper[5029]: I0313 20:33:47.971776 5029 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.001112 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.069026 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.079820 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.201173 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.266933 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.377434 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.467686 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.468138 5029 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.679787 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.715086 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.754867 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.757193 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.773029 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.846717 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.856325 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.929438 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 13 20:33:48 crc kubenswrapper[5029]: I0313 20:33:48.994241 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 20:33:49 crc kubenswrapper[5029]: I0313 20:33:49.065961 5029 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 13 20:33:49 crc kubenswrapper[5029]: I0313 20:33:49.081575 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 20:33:49 crc kubenswrapper[5029]: I0313 20:33:49.196056 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 20:33:49 crc kubenswrapper[5029]: I0313 20:33:49.850556 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 13 20:33:49 crc kubenswrapper[5029]: I0313 20:33:49.863051 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 20:33:50 crc kubenswrapper[5029]: I0313 20:33:50.280561 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 13 20:33:50 crc kubenswrapper[5029]: I0313 20:33:50.358172 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 13 20:33:50 crc kubenswrapper[5029]: I0313 20:33:50.465098 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 20:33:50 crc kubenswrapper[5029]: I0313 20:33:50.948247 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.330107 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.714222 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 
20:33:51.718709 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.749264 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.749350 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.786436 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.820410 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b86d6b979-htnb9"] Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.820844 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" podUID="8a94b9df-557f-42b1-9421-46b17536bafc" containerName="controller-manager" containerID="cri-o://0fcec0839b30e5d78e4ecabcfc5cdf213599a73cc0447991830fe63932dac556" gracePeriod=30 Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.826012 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"] Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.826271 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" podUID="9aa5037c-7a78-4c74-99c8-27e93342fe37" containerName="route-controller-manager" containerID="cri-o://91df90316164973f84d0c0bec1d9b74568b64d0a2b8bfda8990c1bd7054cc4c9" gracePeriod=30 Mar 13 20:33:51 
crc kubenswrapper[5029]: I0313 20:33:51.937183 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937288 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937426 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937460 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937502 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937598 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937578 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937666 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937914 5029 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937935 5029 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937945 5029 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.937967 5029 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:51 crc kubenswrapper[5029]: I0313 20:33:51.948295 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.016667 5029 generic.go:334] "Generic (PLEG): container finished" podID="8a94b9df-557f-42b1-9421-46b17536bafc" containerID="0fcec0839b30e5d78e4ecabcfc5cdf213599a73cc0447991830fe63932dac556" exitCode=0 Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.016760 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" event={"ID":"8a94b9df-557f-42b1-9421-46b17536bafc","Type":"ContainerDied","Data":"0fcec0839b30e5d78e4ecabcfc5cdf213599a73cc0447991830fe63932dac556"} Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.018667 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.018732 5029 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9" exitCode=137 Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.018783 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.018875 5029 scope.go:117] "RemoveContainer" containerID="88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.020968 5029 generic.go:334] "Generic (PLEG): container finished" podID="9aa5037c-7a78-4c74-99c8-27e93342fe37" containerID="91df90316164973f84d0c0bec1d9b74568b64d0a2b8bfda8990c1bd7054cc4c9" exitCode=0 Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.021008 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" event={"ID":"9aa5037c-7a78-4c74-99c8-27e93342fe37","Type":"ContainerDied","Data":"91df90316164973f84d0c0bec1d9b74568b64d0a2b8bfda8990c1bd7054cc4c9"} Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.038519 5029 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.038557 5029 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.041347 5029 scope.go:117] "RemoveContainer" containerID="88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9" Mar 13 20:33:52 crc kubenswrapper[5029]: E0313 20:33:52.041878 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9\": container with ID starting with 88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9 not found: ID does not exist" 
containerID="88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.041950 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9"} err="failed to get container status \"88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9\": rpc error: code = NotFound desc = could not find container \"88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9\": container with ID starting with 88744a4d3804828ff3160f9fcf3e4d7873df8f34db5270deb817a1d0a72ef2c9 not found: ID does not exist" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.152921 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.194637 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.322618 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.328128 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.343320 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-proxy-ca-bundles\") pod \"8a94b9df-557f-42b1-9421-46b17536bafc\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.343399 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmd6q\" (UniqueName: \"kubernetes.io/projected/8a94b9df-557f-42b1-9421-46b17536bafc-kube-api-access-nmd6q\") pod \"8a94b9df-557f-42b1-9421-46b17536bafc\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.343506 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjjrr\" (UniqueName: \"kubernetes.io/projected/9aa5037c-7a78-4c74-99c8-27e93342fe37-kube-api-access-gjjrr\") pod \"9aa5037c-7a78-4c74-99c8-27e93342fe37\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.343582 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-config\") pod \"8a94b9df-557f-42b1-9421-46b17536bafc\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.343638 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-client-ca\") pod \"9aa5037c-7a78-4c74-99c8-27e93342fe37\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.343683 5029 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-config\") pod \"9aa5037c-7a78-4c74-99c8-27e93342fe37\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.343708 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-client-ca\") pod \"8a94b9df-557f-42b1-9421-46b17536bafc\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.343804 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa5037c-7a78-4c74-99c8-27e93342fe37-serving-cert\") pod \"9aa5037c-7a78-4c74-99c8-27e93342fe37\" (UID: \"9aa5037c-7a78-4c74-99c8-27e93342fe37\") " Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.343828 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a94b9df-557f-42b1-9421-46b17536bafc-serving-cert\") pod \"8a94b9df-557f-42b1-9421-46b17536bafc\" (UID: \"8a94b9df-557f-42b1-9421-46b17536bafc\") " Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.344400 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8a94b9df-557f-42b1-9421-46b17536bafc" (UID: "8a94b9df-557f-42b1-9421-46b17536bafc"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.345025 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-client-ca" (OuterVolumeSpecName: "client-ca") pod "8a94b9df-557f-42b1-9421-46b17536bafc" (UID: "8a94b9df-557f-42b1-9421-46b17536bafc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.345057 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-client-ca" (OuterVolumeSpecName: "client-ca") pod "9aa5037c-7a78-4c74-99c8-27e93342fe37" (UID: "9aa5037c-7a78-4c74-99c8-27e93342fe37"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.345163 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-config" (OuterVolumeSpecName: "config") pod "9aa5037c-7a78-4c74-99c8-27e93342fe37" (UID: "9aa5037c-7a78-4c74-99c8-27e93342fe37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.345583 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-config" (OuterVolumeSpecName: "config") pod "8a94b9df-557f-42b1-9421-46b17536bafc" (UID: "8a94b9df-557f-42b1-9421-46b17536bafc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.352357 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a94b9df-557f-42b1-9421-46b17536bafc-kube-api-access-nmd6q" (OuterVolumeSpecName: "kube-api-access-nmd6q") pod "8a94b9df-557f-42b1-9421-46b17536bafc" (UID: "8a94b9df-557f-42b1-9421-46b17536bafc"). InnerVolumeSpecName "kube-api-access-nmd6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.354593 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa5037c-7a78-4c74-99c8-27e93342fe37-kube-api-access-gjjrr" (OuterVolumeSpecName: "kube-api-access-gjjrr") pod "9aa5037c-7a78-4c74-99c8-27e93342fe37" (UID: "9aa5037c-7a78-4c74-99c8-27e93342fe37"). InnerVolumeSpecName "kube-api-access-gjjrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.357352 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa5037c-7a78-4c74-99c8-27e93342fe37-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9aa5037c-7a78-4c74-99c8-27e93342fe37" (UID: "9aa5037c-7a78-4c74-99c8-27e93342fe37"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.362247 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a94b9df-557f-42b1-9421-46b17536bafc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8a94b9df-557f-42b1-9421-46b17536bafc" (UID: "8a94b9df-557f-42b1-9421-46b17536bafc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.387743 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.450824 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa5037c-7a78-4c74-99c8-27e93342fe37-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.450910 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a94b9df-557f-42b1-9421-46b17536bafc-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.450926 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.450955 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmd6q\" (UniqueName: \"kubernetes.io/projected/8a94b9df-557f-42b1-9421-46b17536bafc-kube-api-access-nmd6q\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.450974 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjjrr\" (UniqueName: \"kubernetes.io/projected/9aa5037c-7a78-4c74-99c8-27e93342fe37-kube-api-access-gjjrr\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.450990 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.451006 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.451027 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa5037c-7a78-4c74-99c8-27e93342fe37-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.451040 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a94b9df-557f-42b1-9421-46b17536bafc-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.601363 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 20:33:52 crc kubenswrapper[5029]: I0313 20:33:52.609574 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.030575 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" event={"ID":"9aa5037c-7a78-4c74-99c8-27e93342fe37","Type":"ContainerDied","Data":"642b2abee1d7ba45a786af03c60f10ffe52d47313440e6338ba53409d0fc58c8"} Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.030942 5029 scope.go:117] "RemoveContainer" containerID="91df90316164973f84d0c0bec1d9b74568b64d0a2b8bfda8990c1bd7054cc4c9" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.031181 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.033325 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" event={"ID":"8a94b9df-557f-42b1-9421-46b17536bafc","Type":"ContainerDied","Data":"f44462996df94b37a0f81c65a558af61c66d188224d95aca520557607d01447a"} Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.033365 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b86d6b979-htnb9" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.054627 5029 scope.go:117] "RemoveContainer" containerID="0fcec0839b30e5d78e4ecabcfc5cdf213599a73cc0447991830fe63932dac556" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.055819 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b86d6b979-htnb9"] Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.060986 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b86d6b979-htnb9"] Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.074458 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"] Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.079413 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc646b7d9-bjh6j"] Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.112321 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.822947 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv"] Mar 
13 20:33:53 crc kubenswrapper[5029]: E0313 20:33:53.823333 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a94b9df-557f-42b1-9421-46b17536bafc" containerName="controller-manager" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.823378 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a94b9df-557f-42b1-9421-46b17536bafc" containerName="controller-manager" Mar 13 20:33:53 crc kubenswrapper[5029]: E0313 20:33:53.823400 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa5037c-7a78-4c74-99c8-27e93342fe37" containerName="route-controller-manager" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.823408 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa5037c-7a78-4c74-99c8-27e93342fe37" containerName="route-controller-manager" Mar 13 20:33:53 crc kubenswrapper[5029]: E0313 20:33:53.823422 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5457502b-4e3f-463b-87ae-4013109d2298" containerName="installer" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.823430 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="5457502b-4e3f-463b-87ae-4013109d2298" containerName="installer" Mar 13 20:33:53 crc kubenswrapper[5029]: E0313 20:33:53.823448 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.823456 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.823594 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="5457502b-4e3f-463b-87ae-4013109d2298" containerName="installer" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.823610 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 13 20:33:53 
crc kubenswrapper[5029]: I0313 20:33:53.823621 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa5037c-7a78-4c74-99c8-27e93342fe37" containerName="route-controller-manager" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.823633 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a94b9df-557f-42b1-9421-46b17536bafc" containerName="controller-manager" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.824621 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.828663 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.830462 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.833012 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw"] Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.833254 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.833608 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.833918 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.834173 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.834318 5029 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.838677 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.839592 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw"] Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.839808 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.842991 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.843054 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.843134 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.843394 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.843528 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.843892 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv"] Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.873698 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxbl\" (UniqueName: \"kubernetes.io/projected/41f301ac-a1da-46a4-8314-6b1fcfe865a1-kube-api-access-6vxbl\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.873897 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96d65468-24a4-4b08-95b5-7b0709eb0698-serving-cert\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.874011 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f301ac-a1da-46a4-8314-6b1fcfe865a1-serving-cert\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.874566 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-config\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.874715 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d65468-24a4-4b08-95b5-7b0709eb0698-config\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: 
\"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.874910 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47qw\" (UniqueName: \"kubernetes.io/projected/96d65468-24a4-4b08-95b5-7b0709eb0698-kube-api-access-f47qw\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.874998 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96d65468-24a4-4b08-95b5-7b0709eb0698-client-ca\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.888045 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-client-ca\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.888262 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-proxy-ca-bundles\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.989606 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47qw\" (UniqueName: \"kubernetes.io/projected/96d65468-24a4-4b08-95b5-7b0709eb0698-kube-api-access-f47qw\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.989674 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d65468-24a4-4b08-95b5-7b0709eb0698-config\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.989705 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96d65468-24a4-4b08-95b5-7b0709eb0698-client-ca\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.989732 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-client-ca\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.989773 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-proxy-ca-bundles\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") 
" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.989810 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxbl\" (UniqueName: \"kubernetes.io/projected/41f301ac-a1da-46a4-8314-6b1fcfe865a1-kube-api-access-6vxbl\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.989841 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96d65468-24a4-4b08-95b5-7b0709eb0698-serving-cert\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.989889 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f301ac-a1da-46a4-8314-6b1fcfe865a1-serving-cert\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.989940 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-config\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.991179 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-client-ca\") 
pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.991496 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d65468-24a4-4b08-95b5-7b0709eb0698-config\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.992687 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-proxy-ca-bundles\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.993626 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-config\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:53 crc kubenswrapper[5029]: I0313 20:33:53.994204 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96d65468-24a4-4b08-95b5-7b0709eb0698-client-ca\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:53.996834 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/96d65468-24a4-4b08-95b5-7b0709eb0698-serving-cert\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:53.997046 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f301ac-a1da-46a4-8314-6b1fcfe865a1-serving-cert\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:54.007784 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47qw\" (UniqueName: \"kubernetes.io/projected/96d65468-24a4-4b08-95b5-7b0709eb0698-kube-api-access-f47qw\") pod \"route-controller-manager-8c8544d9b-g2clw\" (UID: \"96d65468-24a4-4b08-95b5-7b0709eb0698\") " pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:54.012548 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxbl\" (UniqueName: \"kubernetes.io/projected/41f301ac-a1da-46a4-8314-6b1fcfe865a1-kube-api-access-6vxbl\") pod \"controller-manager-86d9d4c6cb-9m6dv\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:54.147711 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:54.162940 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:54.431032 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv"] Mar 13 20:33:54 crc kubenswrapper[5029]: W0313 20:33:54.443351 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41f301ac_a1da_46a4_8314_6b1fcfe865a1.slice/crio-30f9d3b0d563ab9bb21625968f7f4d90979e0f5bd215a1324afe967290a5f313 WatchSource:0}: Error finding container 30f9d3b0d563ab9bb21625968f7f4d90979e0f5bd215a1324afe967290a5f313: Status 404 returned error can't find the container with id 30f9d3b0d563ab9bb21625968f7f4d90979e0f5bd215a1324afe967290a5f313 Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:54.483172 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw"] Mar 13 20:33:54 crc kubenswrapper[5029]: W0313 20:33:54.493330 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96d65468_24a4_4b08_95b5_7b0709eb0698.slice/crio-4aa3d5138ab16e82b335aee9367998fac97e563e62d3456bd54ead5ef8df7584 WatchSource:0}: Error finding container 4aa3d5138ab16e82b335aee9367998fac97e563e62d3456bd54ead5ef8df7584: Status 404 returned error can't find the container with id 4aa3d5138ab16e82b335aee9367998fac97e563e62d3456bd54ead5ef8df7584 Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:54.606567 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a94b9df-557f-42b1-9421-46b17536bafc" path="/var/lib/kubelet/pods/8a94b9df-557f-42b1-9421-46b17536bafc/volumes" Mar 13 20:33:54 crc kubenswrapper[5029]: I0313 20:33:54.607896 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9aa5037c-7a78-4c74-99c8-27e93342fe37" path="/var/lib/kubelet/pods/9aa5037c-7a78-4c74-99c8-27e93342fe37/volumes" Mar 13 20:33:55 crc kubenswrapper[5029]: I0313 20:33:55.052346 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" event={"ID":"41f301ac-a1da-46a4-8314-6b1fcfe865a1","Type":"ContainerStarted","Data":"5723f025091b10062122c4e6a998b1cbd30363f9ad993a66182b11851323dd75"} Mar 13 20:33:55 crc kubenswrapper[5029]: I0313 20:33:55.052407 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" event={"ID":"41f301ac-a1da-46a4-8314-6b1fcfe865a1","Type":"ContainerStarted","Data":"30f9d3b0d563ab9bb21625968f7f4d90979e0f5bd215a1324afe967290a5f313"} Mar 13 20:33:55 crc kubenswrapper[5029]: I0313 20:33:55.054029 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" event={"ID":"96d65468-24a4-4b08-95b5-7b0709eb0698","Type":"ContainerStarted","Data":"fe5051b57d63f0b9548a6e19ac45675f43b861e0719fbcf05dcc6bc691fd4de7"} Mar 13 20:33:55 crc kubenswrapper[5029]: I0313 20:33:55.054107 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" event={"ID":"96d65468-24a4-4b08-95b5-7b0709eb0698","Type":"ContainerStarted","Data":"4aa3d5138ab16e82b335aee9367998fac97e563e62d3456bd54ead5ef8df7584"} Mar 13 20:33:55 crc kubenswrapper[5029]: I0313 20:33:55.054160 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:55 crc kubenswrapper[5029]: I0313 20:33:55.099880 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" podStartSLOduration=4.099839995 podStartE2EDuration="4.099839995s" 
podCreationTimestamp="2026-03-13 20:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:33:55.078825262 +0000 UTC m=+395.094907665" watchObservedRunningTime="2026-03-13 20:33:55.099839995 +0000 UTC m=+395.115922398" Mar 13 20:33:55 crc kubenswrapper[5029]: I0313 20:33:55.227125 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" Mar 13 20:33:55 crc kubenswrapper[5029]: I0313 20:33:55.246327 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8c8544d9b-g2clw" podStartSLOduration=4.246302916 podStartE2EDuration="4.246302916s" podCreationTimestamp="2026-03-13 20:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:33:55.103066672 +0000 UTC m=+395.119149075" watchObservedRunningTime="2026-03-13 20:33:55.246302916 +0000 UTC m=+395.262385319" Mar 13 20:33:56 crc kubenswrapper[5029]: I0313 20:33:56.060915 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:33:56 crc kubenswrapper[5029]: I0313 20:33:56.066410 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.166160 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557234-j9kj4"] Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.167264 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557234-j9kj4" Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.171107 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.171330 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.171454 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.173367 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557234-j9kj4"] Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.293093 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzv98\" (UniqueName: \"kubernetes.io/projected/cbe71349-ba8d-4f87-9e80-2d6a5417b5be-kube-api-access-nzv98\") pod \"auto-csr-approver-29557234-j9kj4\" (UID: \"cbe71349-ba8d-4f87-9e80-2d6a5417b5be\") " pod="openshift-infra/auto-csr-approver-29557234-j9kj4" Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.395886 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzv98\" (UniqueName: \"kubernetes.io/projected/cbe71349-ba8d-4f87-9e80-2d6a5417b5be-kube-api-access-nzv98\") pod \"auto-csr-approver-29557234-j9kj4\" (UID: \"cbe71349-ba8d-4f87-9e80-2d6a5417b5be\") " pod="openshift-infra/auto-csr-approver-29557234-j9kj4" Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.419737 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzv98\" (UniqueName: \"kubernetes.io/projected/cbe71349-ba8d-4f87-9e80-2d6a5417b5be-kube-api-access-nzv98\") pod \"auto-csr-approver-29557234-j9kj4\" (UID: \"cbe71349-ba8d-4f87-9e80-2d6a5417b5be\") " 
pod="openshift-infra/auto-csr-approver-29557234-j9kj4" Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.535004 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557234-j9kj4" Mar 13 20:34:00 crc kubenswrapper[5029]: I0313 20:34:00.969933 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557234-j9kj4"] Mar 13 20:34:01 crc kubenswrapper[5029]: I0313 20:34:01.097698 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557234-j9kj4" event={"ID":"cbe71349-ba8d-4f87-9e80-2d6a5417b5be","Type":"ContainerStarted","Data":"4c5f028cb62521f979636e8122638e6dce3fa65c9a71b2164f03ac6fde5f13d8"} Mar 13 20:34:03 crc kubenswrapper[5029]: I0313 20:34:03.112128 5029 generic.go:334] "Generic (PLEG): container finished" podID="cbe71349-ba8d-4f87-9e80-2d6a5417b5be" containerID="ac984f041fa5cdbd8477204cc564aaa4562e343cc3e0da05d15ac924439238b8" exitCode=0 Mar 13 20:34:03 crc kubenswrapper[5029]: I0313 20:34:03.112208 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557234-j9kj4" event={"ID":"cbe71349-ba8d-4f87-9e80-2d6a5417b5be","Type":"ContainerDied","Data":"ac984f041fa5cdbd8477204cc564aaa4562e343cc3e0da05d15ac924439238b8"} Mar 13 20:34:04 crc kubenswrapper[5029]: I0313 20:34:04.510886 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557234-j9kj4" Mar 13 20:34:04 crc kubenswrapper[5029]: I0313 20:34:04.553211 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzv98\" (UniqueName: \"kubernetes.io/projected/cbe71349-ba8d-4f87-9e80-2d6a5417b5be-kube-api-access-nzv98\") pod \"cbe71349-ba8d-4f87-9e80-2d6a5417b5be\" (UID: \"cbe71349-ba8d-4f87-9e80-2d6a5417b5be\") " Mar 13 20:34:04 crc kubenswrapper[5029]: I0313 20:34:04.561061 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe71349-ba8d-4f87-9e80-2d6a5417b5be-kube-api-access-nzv98" (OuterVolumeSpecName: "kube-api-access-nzv98") pod "cbe71349-ba8d-4f87-9e80-2d6a5417b5be" (UID: "cbe71349-ba8d-4f87-9e80-2d6a5417b5be"). InnerVolumeSpecName "kube-api-access-nzv98". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:04 crc kubenswrapper[5029]: I0313 20:34:04.654538 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzv98\" (UniqueName: \"kubernetes.io/projected/cbe71349-ba8d-4f87-9e80-2d6a5417b5be-kube-api-access-nzv98\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.138175 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557234-j9kj4" event={"ID":"cbe71349-ba8d-4f87-9e80-2d6a5417b5be","Type":"ContainerDied","Data":"4c5f028cb62521f979636e8122638e6dce3fa65c9a71b2164f03ac6fde5f13d8"} Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.138471 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5f028cb62521f979636e8122638e6dce3fa65c9a71b2164f03ac6fde5f13d8" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.138261 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557234-j9kj4" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.416146 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz4wv"] Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.416687 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qz4wv" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerName="registry-server" containerID="cri-o://f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5" gracePeriod=30 Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.429015 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-494x8"] Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.429334 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-494x8" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerName="registry-server" containerID="cri-o://d1842e0b2d093158ee852e0b3bc2ec06d11a44ae408f34bb65470916033ed1e4" gracePeriod=30 Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.443919 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kl2lj"] Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.444281 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kl2lj" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerName="registry-server" containerID="cri-o://8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96" gracePeriod=30 Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.451655 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zb64j"] Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.452151 5029 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" podUID="8a1ea22d-3be3-412d-be38-ab360aae90e5" containerName="marketplace-operator" containerID="cri-o://ad8dab26cfeba72cdee42cdffe8732ceb3ebfa648dc0198963af9da8b5eff610" gracePeriod=30 Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.476511 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vg7pb"] Mar 13 20:34:05 crc kubenswrapper[5029]: E0313 20:34:05.476840 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe71349-ba8d-4f87-9e80-2d6a5417b5be" containerName="oc" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.476930 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe71349-ba8d-4f87-9e80-2d6a5417b5be" containerName="oc" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.477052 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe71349-ba8d-4f87-9e80-2d6a5417b5be" containerName="oc" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.477684 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.485065 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vg7pb"] Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.492452 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xlnz"] Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.492874 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2xlnz" podUID="553bdc43-797f-401f-9ca0-875060ab0553" containerName="registry-server" containerID="cri-o://29f97a6ac0965e8116f60cf3e39b0f1d0c9e462aaf94fce947d5c7877b0ead80" gracePeriod=30 Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.512406 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhg5r"] Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.512811 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dhg5r" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" containerName="registry-server" containerID="cri-o://cbf2d14681501515c511963c2ec733c6ad3f6bc3a8be17580ae7738bca956844" gracePeriod=30 Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.520715 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s58vt"] Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.521166 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s58vt" podUID="151390c1-ebb0-49bf-be99-3326fc839781" containerName="registry-server" containerID="cri-o://8cc01f9bedca0104f539695ae721b0dc61aab86685a7c3bfe37ba10fdfd2ee5c" gracePeriod=30 Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.534239 5029 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpzl2"] Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.534655 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vpzl2" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerName="registry-server" containerID="cri-o://ea431d9d073770b052681e5acfa214dc6ca5c51dc6e4ecff60dfab60fd2f9387" gracePeriod=30 Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.566609 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e6a8cc11-fafe-4b5a-a194-61c8680f0585-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vg7pb\" (UID: \"e6a8cc11-fafe-4b5a-a194-61c8680f0585\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.566691 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6a8cc11-fafe-4b5a-a194-61c8680f0585-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vg7pb\" (UID: \"e6a8cc11-fafe-4b5a-a194-61c8680f0585\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.566723 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmn7\" (UniqueName: \"kubernetes.io/projected/e6a8cc11-fafe-4b5a-a194-61c8680f0585-kube-api-access-rfmn7\") pod \"marketplace-operator-79b997595-vg7pb\" (UID: \"e6a8cc11-fafe-4b5a-a194-61c8680f0585\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.668486 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/e6a8cc11-fafe-4b5a-a194-61c8680f0585-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vg7pb\" (UID: \"e6a8cc11-fafe-4b5a-a194-61c8680f0585\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.668543 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6a8cc11-fafe-4b5a-a194-61c8680f0585-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vg7pb\" (UID: \"e6a8cc11-fafe-4b5a-a194-61c8680f0585\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.668581 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfmn7\" (UniqueName: \"kubernetes.io/projected/e6a8cc11-fafe-4b5a-a194-61c8680f0585-kube-api-access-rfmn7\") pod \"marketplace-operator-79b997595-vg7pb\" (UID: \"e6a8cc11-fafe-4b5a-a194-61c8680f0585\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.677902 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6a8cc11-fafe-4b5a-a194-61c8680f0585-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vg7pb\" (UID: \"e6a8cc11-fafe-4b5a-a194-61c8680f0585\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:05 crc kubenswrapper[5029]: I0313 20:34:05.680330 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e6a8cc11-fafe-4b5a-a194-61c8680f0585-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vg7pb\" (UID: \"e6a8cc11-fafe-4b5a-a194-61c8680f0585\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:05 crc 
kubenswrapper[5029]: I0313 20:34:05.693961 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfmn7\" (UniqueName: \"kubernetes.io/projected/e6a8cc11-fafe-4b5a-a194-61c8680f0585-kube-api-access-rfmn7\") pod \"marketplace-operator-79b997595-vg7pb\" (UID: \"e6a8cc11-fafe-4b5a-a194-61c8680f0585\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.066934 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.073276 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz4wv" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.138083 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kl2lj" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.157548 5029 generic.go:334] "Generic (PLEG): container finished" podID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerID="8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96" exitCode=0 Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.157649 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kl2lj" event={"ID":"e33b18fb-9cd7-4c30-bdb0-402734c47cc8","Type":"ContainerDied","Data":"8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96"} Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.157692 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kl2lj" event={"ID":"e33b18fb-9cd7-4c30-bdb0-402734c47cc8","Type":"ContainerDied","Data":"b5a162d06d896d138b3974572d22f746ddbf51300ea859bf8080250959d83a1c"} Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.157702 5029 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kl2lj" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.157717 5029 scope.go:117] "RemoveContainer" containerID="8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.164839 5029 generic.go:334] "Generic (PLEG): container finished" podID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerID="ea431d9d073770b052681e5acfa214dc6ca5c51dc6e4ecff60dfab60fd2f9387" exitCode=0 Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.165205 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpzl2" event={"ID":"5760820d-9df0-4f3e-b14f-1c64e2607ecd","Type":"ContainerDied","Data":"ea431d9d073770b052681e5acfa214dc6ca5c51dc6e4ecff60dfab60fd2f9387"} Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.167796 5029 generic.go:334] "Generic (PLEG): container finished" podID="553bdc43-797f-401f-9ca0-875060ab0553" containerID="29f97a6ac0965e8116f60cf3e39b0f1d0c9e462aaf94fce947d5c7877b0ead80" exitCode=0 Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.167868 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xlnz" event={"ID":"553bdc43-797f-401f-9ca0-875060ab0553","Type":"ContainerDied","Data":"29f97a6ac0965e8116f60cf3e39b0f1d0c9e462aaf94fce947d5c7877b0ead80"} Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.169808 5029 generic.go:334] "Generic (PLEG): container finished" podID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerID="d1842e0b2d093158ee852e0b3bc2ec06d11a44ae408f34bb65470916033ed1e4" exitCode=0 Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.169897 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-494x8" 
event={"ID":"3c8fadb2-962e-4bca-8305-a51b8d2334bb","Type":"ContainerDied","Data":"d1842e0b2d093158ee852e0b3bc2ec06d11a44ae408f34bb65470916033ed1e4"} Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.176384 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66klh\" (UniqueName: \"kubernetes.io/projected/e2f9d5d5-9771-4294-961f-110aa2430e29-kube-api-access-66klh\") pod \"e2f9d5d5-9771-4294-961f-110aa2430e29\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.176445 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-catalog-content\") pod \"e2f9d5d5-9771-4294-961f-110aa2430e29\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.176569 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-utilities\") pod \"e2f9d5d5-9771-4294-961f-110aa2430e29\" (UID: \"e2f9d5d5-9771-4294-961f-110aa2430e29\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.178904 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-utilities" (OuterVolumeSpecName: "utilities") pod "e2f9d5d5-9771-4294-961f-110aa2430e29" (UID: "e2f9d5d5-9771-4294-961f-110aa2430e29"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.182221 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f9d5d5-9771-4294-961f-110aa2430e29-kube-api-access-66klh" (OuterVolumeSpecName: "kube-api-access-66klh") pod "e2f9d5d5-9771-4294-961f-110aa2430e29" (UID: "e2f9d5d5-9771-4294-961f-110aa2430e29"). InnerVolumeSpecName "kube-api-access-66klh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.183497 5029 generic.go:334] "Generic (PLEG): container finished" podID="151390c1-ebb0-49bf-be99-3326fc839781" containerID="8cc01f9bedca0104f539695ae721b0dc61aab86685a7c3bfe37ba10fdfd2ee5c" exitCode=0 Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.183611 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s58vt" event={"ID":"151390c1-ebb0-49bf-be99-3326fc839781","Type":"ContainerDied","Data":"8cc01f9bedca0104f539695ae721b0dc61aab86685a7c3bfe37ba10fdfd2ee5c"} Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.187342 5029 generic.go:334] "Generic (PLEG): container finished" podID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerID="f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5" exitCode=0 Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.187404 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz4wv" event={"ID":"e2f9d5d5-9771-4294-961f-110aa2430e29","Type":"ContainerDied","Data":"f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5"} Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.187436 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz4wv" event={"ID":"e2f9d5d5-9771-4294-961f-110aa2430e29","Type":"ContainerDied","Data":"00a5f80e1538f209406fd87a2ae3d8a5e1bae524bf542d10345838fe04b5283b"} Mar 13 
20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.187506 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz4wv" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.190154 5029 generic.go:334] "Generic (PLEG): container finished" podID="8a1ea22d-3be3-412d-be38-ab360aae90e5" containerID="ad8dab26cfeba72cdee42cdffe8732ceb3ebfa648dc0198963af9da8b5eff610" exitCode=0 Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.190220 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" event={"ID":"8a1ea22d-3be3-412d-be38-ab360aae90e5","Type":"ContainerDied","Data":"ad8dab26cfeba72cdee42cdffe8732ceb3ebfa648dc0198963af9da8b5eff610"} Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.193505 5029 generic.go:334] "Generic (PLEG): container finished" podID="866c95e1-566b-4e67-8822-b6c182cb3378" containerID="cbf2d14681501515c511963c2ec733c6ad3f6bc3a8be17580ae7738bca956844" exitCode=0 Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.193572 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhg5r" event={"ID":"866c95e1-566b-4e67-8822-b6c182cb3378","Type":"ContainerDied","Data":"cbf2d14681501515c511963c2ec733c6ad3f6bc3a8be17580ae7738bca956844"} Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.195187 5029 scope.go:117] "RemoveContainer" containerID="fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.238192 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2f9d5d5-9771-4294-961f-110aa2430e29" (UID: "e2f9d5d5-9771-4294-961f-110aa2430e29"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.238714 5029 scope.go:117] "RemoveContainer" containerID="8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.259378 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-494x8" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.277677 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-utilities\") pod \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.278133 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-catalog-content\") pod \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.278211 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk6pk\" (UniqueName: \"kubernetes.io/projected/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-kube-api-access-qk6pk\") pod \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\" (UID: \"e33b18fb-9cd7-4c30-bdb0-402734c47cc8\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.278481 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.278505 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f9d5d5-9771-4294-961f-110aa2430e29-utilities\") on 
node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.278518 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66klh\" (UniqueName: \"kubernetes.io/projected/e2f9d5d5-9771-4294-961f-110aa2430e29-kube-api-access-66klh\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.279750 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-utilities" (OuterVolumeSpecName: "utilities") pod "e33b18fb-9cd7-4c30-bdb0-402734c47cc8" (UID: "e33b18fb-9cd7-4c30-bdb0-402734c47cc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.285726 5029 scope.go:117] "RemoveContainer" containerID="8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96" Mar 13 20:34:06 crc kubenswrapper[5029]: E0313 20:34:06.286718 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96\": container with ID starting with 8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96 not found: ID does not exist" containerID="8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.286767 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96"} err="failed to get container status \"8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96\": rpc error: code = NotFound desc = could not find container \"8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96\": container with ID starting with 8be83f8382b2afb7ba5acdcca544cedcd1af3db9158a869bbd70a085776c3e96 not found: ID does not exist" Mar 13 20:34:06 crc 
kubenswrapper[5029]: I0313 20:34:06.286799 5029 scope.go:117] "RemoveContainer" containerID="fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471" Mar 13 20:34:06 crc kubenswrapper[5029]: E0313 20:34:06.287224 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471\": container with ID starting with fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471 not found: ID does not exist" containerID="fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.287245 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471"} err="failed to get container status \"fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471\": rpc error: code = NotFound desc = could not find container \"fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471\": container with ID starting with fd003fa91952bca4cd5d68be3e4bde433871a0ca212d9e25356acb74f5a0a471 not found: ID does not exist" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.287261 5029 scope.go:117] "RemoveContainer" containerID="8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e" Mar 13 20:34:06 crc kubenswrapper[5029]: E0313 20:34:06.287512 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e\": container with ID starting with 8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e not found: ID does not exist" containerID="8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.287533 5029 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e"} err="failed to get container status \"8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e\": rpc error: code = NotFound desc = could not find container \"8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e\": container with ID starting with 8705c2900107d90a98cb40c2cfd56ae7203d316505069dc4fbc6a55845456e0e not found: ID does not exist" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.287548 5029 scope.go:117] "RemoveContainer" containerID="f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.306199 5029 scope.go:117] "RemoveContainer" containerID="dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.306777 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.321480 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.326013 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-kube-api-access-qk6pk" (OuterVolumeSpecName: "kube-api-access-qk6pk") pod "e33b18fb-9cd7-4c30-bdb0-402734c47cc8" (UID: "e33b18fb-9cd7-4c30-bdb0-402734c47cc8"). InnerVolumeSpecName "kube-api-access-qk6pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.333842 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.341947 5029 scope.go:117] "RemoveContainer" containerID="ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.364481 5029 scope.go:117] "RemoveContainer" containerID="f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.365296 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e33b18fb-9cd7-4c30-bdb0-402734c47cc8" (UID: "e33b18fb-9cd7-4c30-bdb0-402734c47cc8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: E0313 20:34:06.367267 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5\": container with ID starting with f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5 not found: ID does not exist" containerID="f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.367318 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5"} err="failed to get container status \"f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5\": rpc error: code = NotFound desc = could not find container \"f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5\": container with ID starting with f60496007aaf4c5aa112e8bb74209b602c22d0c0a73bd23fb2f3977e4384a3b5 not found: ID does not exist" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.367364 
5029 scope.go:117] "RemoveContainer" containerID="dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.367279 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:34:06 crc kubenswrapper[5029]: E0313 20:34:06.367730 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba\": container with ID starting with dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba not found: ID does not exist" containerID="dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.367757 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba"} err="failed to get container status \"dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba\": rpc error: code = NotFound desc = could not find container \"dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba\": container with ID starting with dff478c22470a36210c8e5d8352f9d52fae4341299d10e109af0ec37a23e0aba not found: ID does not exist" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.367773 5029 scope.go:117] "RemoveContainer" containerID="ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0" Mar 13 20:34:06 crc kubenswrapper[5029]: E0313 20:34:06.369581 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0\": container with ID starting with ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0 not found: ID does not exist" 
containerID="ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.369634 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0"} err="failed to get container status \"ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0\": rpc error: code = NotFound desc = could not find container \"ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0\": container with ID starting with ff1ae5f3f6ed51965e167d2e487a5baefbb0a16cbf5d0f75e92bc6407d16b1d0 not found: ID does not exist" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.371144 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.379414 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnhl4\" (UniqueName: \"kubernetes.io/projected/3c8fadb2-962e-4bca-8305-a51b8d2334bb-kube-api-access-tnhl4\") pod \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.379501 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-utilities\") pod \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.379571 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-catalog-content\") pod \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\" (UID: \"3c8fadb2-962e-4bca-8305-a51b8d2334bb\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 
20:34:06.379876 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.379906 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk6pk\" (UniqueName: \"kubernetes.io/projected/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-kube-api-access-qk6pk\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.379925 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33b18fb-9cd7-4c30-bdb0-402734c47cc8-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.380967 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-utilities" (OuterVolumeSpecName: "utilities") pod "3c8fadb2-962e-4bca-8305-a51b8d2334bb" (UID: "3c8fadb2-962e-4bca-8305-a51b8d2334bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.384959 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8fadb2-962e-4bca-8305-a51b8d2334bb-kube-api-access-tnhl4" (OuterVolumeSpecName: "kube-api-access-tnhl4") pod "3c8fadb2-962e-4bca-8305-a51b8d2334bb" (UID: "3c8fadb2-962e-4bca-8305-a51b8d2334bb"). InnerVolumeSpecName "kube-api-access-tnhl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.442385 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c8fadb2-962e-4bca-8305-a51b8d2334bb" (UID: "3c8fadb2-962e-4bca-8305-a51b8d2334bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480473 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drtfq\" (UniqueName: \"kubernetes.io/projected/8a1ea22d-3be3-412d-be38-ab360aae90e5-kube-api-access-drtfq\") pod \"8a1ea22d-3be3-412d-be38-ab360aae90e5\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480526 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-utilities\") pod \"553bdc43-797f-401f-9ca0-875060ab0553\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480566 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6k4n\" (UniqueName: \"kubernetes.io/projected/866c95e1-566b-4e67-8822-b6c182cb3378-kube-api-access-q6k4n\") pod \"866c95e1-566b-4e67-8822-b6c182cb3378\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480590 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz67n\" (UniqueName: \"kubernetes.io/projected/151390c1-ebb0-49bf-be99-3326fc839781-kube-api-access-gz67n\") pod \"151390c1-ebb0-49bf-be99-3326fc839781\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 
20:34:06.480610 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xv76\" (UniqueName: \"kubernetes.io/projected/553bdc43-797f-401f-9ca0-875060ab0553-kube-api-access-8xv76\") pod \"553bdc43-797f-401f-9ca0-875060ab0553\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480630 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6cd7\" (UniqueName: \"kubernetes.io/projected/5760820d-9df0-4f3e-b14f-1c64e2607ecd-kube-api-access-q6cd7\") pod \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480661 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-trusted-ca\") pod \"8a1ea22d-3be3-412d-be38-ab360aae90e5\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480681 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-utilities\") pod \"866c95e1-566b-4e67-8822-b6c182cb3378\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480700 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-utilities\") pod \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480739 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-catalog-content\") pod \"866c95e1-566b-4e67-8822-b6c182cb3378\" (UID: \"866c95e1-566b-4e67-8822-b6c182cb3378\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480790 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-utilities\") pod \"151390c1-ebb0-49bf-be99-3326fc839781\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480823 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-catalog-content\") pod \"151390c1-ebb0-49bf-be99-3326fc839781\" (UID: \"151390c1-ebb0-49bf-be99-3326fc839781\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480875 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-catalog-content\") pod \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\" (UID: \"5760820d-9df0-4f3e-b14f-1c64e2607ecd\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480910 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-operator-metrics\") pod \"8a1ea22d-3be3-412d-be38-ab360aae90e5\" (UID: \"8a1ea22d-3be3-412d-be38-ab360aae90e5\") " Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.480933 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-catalog-content\") pod \"553bdc43-797f-401f-9ca0-875060ab0553\" (UID: \"553bdc43-797f-401f-9ca0-875060ab0553\") " Mar 13 20:34:06 crc 
kubenswrapper[5029]: I0313 20:34:06.481193 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.481209 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8fadb2-962e-4bca-8305-a51b8d2334bb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.481221 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnhl4\" (UniqueName: \"kubernetes.io/projected/3c8fadb2-962e-4bca-8305-a51b8d2334bb-kube-api-access-tnhl4\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.481441 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-utilities" (OuterVolumeSpecName: "utilities") pod "553bdc43-797f-401f-9ca0-875060ab0553" (UID: "553bdc43-797f-401f-9ca0-875060ab0553"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.481662 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-utilities" (OuterVolumeSpecName: "utilities") pod "866c95e1-566b-4e67-8822-b6c182cb3378" (UID: "866c95e1-566b-4e67-8822-b6c182cb3378"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.484144 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-utilities" (OuterVolumeSpecName: "utilities") pod "151390c1-ebb0-49bf-be99-3326fc839781" (UID: "151390c1-ebb0-49bf-be99-3326fc839781"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.486302 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866c95e1-566b-4e67-8822-b6c182cb3378-kube-api-access-q6k4n" (OuterVolumeSpecName: "kube-api-access-q6k4n") pod "866c95e1-566b-4e67-8822-b6c182cb3378" (UID: "866c95e1-566b-4e67-8822-b6c182cb3378"). InnerVolumeSpecName "kube-api-access-q6k4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.487872 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-utilities" (OuterVolumeSpecName: "utilities") pod "5760820d-9df0-4f3e-b14f-1c64e2607ecd" (UID: "5760820d-9df0-4f3e-b14f-1c64e2607ecd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.490229 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1ea22d-3be3-412d-be38-ab360aae90e5-kube-api-access-drtfq" (OuterVolumeSpecName: "kube-api-access-drtfq") pod "8a1ea22d-3be3-412d-be38-ab360aae90e5" (UID: "8a1ea22d-3be3-412d-be38-ab360aae90e5"). InnerVolumeSpecName "kube-api-access-drtfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.490284 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151390c1-ebb0-49bf-be99-3326fc839781-kube-api-access-gz67n" (OuterVolumeSpecName: "kube-api-access-gz67n") pod "151390c1-ebb0-49bf-be99-3326fc839781" (UID: "151390c1-ebb0-49bf-be99-3326fc839781"). InnerVolumeSpecName "kube-api-access-gz67n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.490350 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8a1ea22d-3be3-412d-be38-ab360aae90e5" (UID: "8a1ea22d-3be3-412d-be38-ab360aae90e5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.491193 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8a1ea22d-3be3-412d-be38-ab360aae90e5" (UID: "8a1ea22d-3be3-412d-be38-ab360aae90e5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.491365 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5760820d-9df0-4f3e-b14f-1c64e2607ecd-kube-api-access-q6cd7" (OuterVolumeSpecName: "kube-api-access-q6cd7") pod "5760820d-9df0-4f3e-b14f-1c64e2607ecd" (UID: "5760820d-9df0-4f3e-b14f-1c64e2607ecd"). InnerVolumeSpecName "kube-api-access-q6cd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.493967 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553bdc43-797f-401f-9ca0-875060ab0553-kube-api-access-8xv76" (OuterVolumeSpecName: "kube-api-access-8xv76") pod "553bdc43-797f-401f-9ca0-875060ab0553" (UID: "553bdc43-797f-401f-9ca0-875060ab0553"). InnerVolumeSpecName "kube-api-access-8xv76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.496036 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kl2lj"] Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.502431 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kl2lj"] Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.537819 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "866c95e1-566b-4e67-8822-b6c182cb3378" (UID: "866c95e1-566b-4e67-8822-b6c182cb3378"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.553014 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz4wv"] Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.556676 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qz4wv"] Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.556715 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "553bdc43-797f-401f-9ca0-875060ab0553" (UID: "553bdc43-797f-401f-9ca0-875060ab0553"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582514 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drtfq\" (UniqueName: \"kubernetes.io/projected/8a1ea22d-3be3-412d-be38-ab360aae90e5-kube-api-access-drtfq\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582560 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582573 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6k4n\" (UniqueName: \"kubernetes.io/projected/866c95e1-566b-4e67-8822-b6c182cb3378-kube-api-access-q6k4n\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582583 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xv76\" (UniqueName: \"kubernetes.io/projected/553bdc43-797f-401f-9ca0-875060ab0553-kube-api-access-8xv76\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582592 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6cd7\" (UniqueName: \"kubernetes.io/projected/5760820d-9df0-4f3e-b14f-1c64e2607ecd-kube-api-access-q6cd7\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582656 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz67n\" (UniqueName: \"kubernetes.io/projected/151390c1-ebb0-49bf-be99-3326fc839781-kube-api-access-gz67n\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582668 5029 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582678 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582686 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582694 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866c95e1-566b-4e67-8822-b6c182cb3378-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582703 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582711 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553bdc43-797f-401f-9ca0-875060ab0553-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.582720 5029 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a1ea22d-3be3-412d-be38-ab360aae90e5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.621332 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" path="/var/lib/kubelet/pods/e2f9d5d5-9771-4294-961f-110aa2430e29/volumes" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.622066 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" path="/var/lib/kubelet/pods/e33b18fb-9cd7-4c30-bdb0-402734c47cc8/volumes" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.675092 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5760820d-9df0-4f3e-b14f-1c64e2607ecd" (UID: "5760820d-9df0-4f3e-b14f-1c64e2607ecd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.684148 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5760820d-9df0-4f3e-b14f-1c64e2607ecd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.703315 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "151390c1-ebb0-49bf-be99-3326fc839781" (UID: "151390c1-ebb0-49bf-be99-3326fc839781"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.786065 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151390c1-ebb0-49bf-be99-3326fc839781-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:06 crc kubenswrapper[5029]: I0313 20:34:06.830403 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vg7pb"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.204013 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s58vt" event={"ID":"151390c1-ebb0-49bf-be99-3326fc839781","Type":"ContainerDied","Data":"fdfefcf4e8ed0082d885eb328f04c2183255c33405bb375e88d69ef802f95219"} Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.204087 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s58vt" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.204112 5029 scope.go:117] "RemoveContainer" containerID="8cc01f9bedca0104f539695ae721b0dc61aab86685a7c3bfe37ba10fdfd2ee5c" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.206094 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" event={"ID":"e6a8cc11-fafe-4b5a-a194-61c8680f0585","Type":"ContainerStarted","Data":"189eb55416daf0960c7abef90eef760231fab9e011518bcab592f0b47e411a6b"} Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.206147 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" event={"ID":"e6a8cc11-fafe-4b5a-a194-61c8680f0585","Type":"ContainerStarted","Data":"9f2ab75d7511818cfb29d903fd6079f8905eb3b3b3e26472e208cb0b3c5ec30e"} Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.206311 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.208223 5029 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vg7pb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" start-of-body= Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.208282 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" podUID="e6a8cc11-fafe-4b5a-a194-61c8680f0585" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.210406 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhg5r" event={"ID":"866c95e1-566b-4e67-8822-b6c182cb3378","Type":"ContainerDied","Data":"e901e01d1d1320324adcb84fb917a20e539d42dc05e04a0a78f56524948d179b"} Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.210559 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhg5r" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.215962 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpzl2" event={"ID":"5760820d-9df0-4f3e-b14f-1c64e2607ecd","Type":"ContainerDied","Data":"a1f42f1d7c167719aad66a7344fc38af406da12549f4c7946033c3de21439189"} Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.216054 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpzl2" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.222807 5029 scope.go:117] "RemoveContainer" containerID="d5c8277dafd0da5519b017399b95f07199f91bbd7178b4a3da16fa3f887d4f41" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.225217 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-494x8" event={"ID":"3c8fadb2-962e-4bca-8305-a51b8d2334bb","Type":"ContainerDied","Data":"6c9c4c6fbf86c7fa3f5ff87aca6fcc111251a88b6b2b0e8dc6dbfe0513eb1cac"} Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.225360 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-494x8" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.232598 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.232620 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zb64j" event={"ID":"8a1ea22d-3be3-412d-be38-ab360aae90e5","Type":"ContainerDied","Data":"1b859d691e2541cead23e858d082af7ff17f780bcfb0300cf7f19db2adff416a"} Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.232777 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" podStartSLOduration=2.232751206 podStartE2EDuration="2.232751206s" podCreationTimestamp="2026-03-13 20:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:34:07.228822039 +0000 UTC m=+407.244904442" watchObservedRunningTime="2026-03-13 20:34:07.232751206 +0000 UTC m=+407.248833609" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.238547 5029 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xlnz" event={"ID":"553bdc43-797f-401f-9ca0-875060ab0553","Type":"ContainerDied","Data":"e46353742625e1e73694b5009cc17df6a74761763432c40ce5fe22e60b45a6e8"} Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.238723 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xlnz" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.258810 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhg5r"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.259839 5029 scope.go:117] "RemoveContainer" containerID="f8d961af7a23f171fbcc1552334746a11ef342ef96c2a5b3c4484266afa2f444" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.263063 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhg5r"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.276310 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s58vt"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.280608 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s58vt"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.284317 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zb64j"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.291504 5029 scope.go:117] "RemoveContainer" containerID="cbf2d14681501515c511963c2ec733c6ad3f6bc3a8be17580ae7738bca956844" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.294639 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zb64j"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.310806 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-494x8"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.328146 5029 scope.go:117] "RemoveContainer" containerID="8fcef2c32b40494bf2fdaa8be6712e5f4df0a931a4d74917a0479683da6c9cf2" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.331172 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-494x8"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.336180 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xlnz"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.340528 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xlnz"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.343930 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpzl2"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.348406 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vpzl2"] Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.348790 5029 scope.go:117] "RemoveContainer" containerID="54090f592321456ce898dfcd173ae4d1baed9e0fb40af03eba9c6d37b429956a" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.364036 5029 scope.go:117] "RemoveContainer" containerID="ea431d9d073770b052681e5acfa214dc6ca5c51dc6e4ecff60dfab60fd2f9387" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.379295 5029 scope.go:117] "RemoveContainer" containerID="78e06fe6c6a0a3994a216ec86b6bc8a85a6111a25313eeafd09bc189f190bd54" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.398592 5029 scope.go:117] "RemoveContainer" containerID="e3d8f7abe397d5193498548aaeb0902ff939ac0331242f65adf27098bb856bd3" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.420289 5029 scope.go:117] "RemoveContainer" containerID="d1842e0b2d093158ee852e0b3bc2ec06d11a44ae408f34bb65470916033ed1e4" Mar 13 
20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.443225 5029 scope.go:117] "RemoveContainer" containerID="c20eec6fc26eb49f3dd544a9135c08ed7e30e303e843e374256b157462d81f7d" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.460133 5029 scope.go:117] "RemoveContainer" containerID="d00e6288411ba217f63e2769ebd3036dc0b83ee3f28b33f2fe739e9096f53586" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.478839 5029 scope.go:117] "RemoveContainer" containerID="ad8dab26cfeba72cdee42cdffe8732ceb3ebfa648dc0198963af9da8b5eff610" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.502769 5029 scope.go:117] "RemoveContainer" containerID="29f97a6ac0965e8116f60cf3e39b0f1d0c9e462aaf94fce947d5c7877b0ead80" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.523736 5029 scope.go:117] "RemoveContainer" containerID="c3d369815cd4841e112c5bc77cd26a96679c471fd2681199105e78449d4a689d" Mar 13 20:34:07 crc kubenswrapper[5029]: I0313 20:34:07.568711 5029 scope.go:117] "RemoveContainer" containerID="d972d6e6b3bba28456fe83c77807be5be5bf83bbc79c0f35da697d66a40b1d39" Mar 13 20:34:08 crc kubenswrapper[5029]: I0313 20:34:08.266498 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vg7pb" Mar 13 20:34:08 crc kubenswrapper[5029]: I0313 20:34:08.607907 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="151390c1-ebb0-49bf-be99-3326fc839781" path="/var/lib/kubelet/pods/151390c1-ebb0-49bf-be99-3326fc839781/volumes" Mar 13 20:34:08 crc kubenswrapper[5029]: I0313 20:34:08.609090 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" path="/var/lib/kubelet/pods/3c8fadb2-962e-4bca-8305-a51b8d2334bb/volumes" Mar 13 20:34:08 crc kubenswrapper[5029]: I0313 20:34:08.610043 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553bdc43-797f-401f-9ca0-875060ab0553" 
path="/var/lib/kubelet/pods/553bdc43-797f-401f-9ca0-875060ab0553/volumes" Mar 13 20:34:08 crc kubenswrapper[5029]: I0313 20:34:08.611609 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" path="/var/lib/kubelet/pods/5760820d-9df0-4f3e-b14f-1c64e2607ecd/volumes" Mar 13 20:34:08 crc kubenswrapper[5029]: I0313 20:34:08.612429 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" path="/var/lib/kubelet/pods/866c95e1-566b-4e67-8822-b6c182cb3378/volumes" Mar 13 20:34:08 crc kubenswrapper[5029]: I0313 20:34:08.613768 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1ea22d-3be3-412d-be38-ab360aae90e5" path="/var/lib/kubelet/pods/8a1ea22d-3be3-412d-be38-ab360aae90e5/volumes" Mar 13 20:34:11 crc kubenswrapper[5029]: I0313 20:34:11.772049 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv"] Mar 13 20:34:11 crc kubenswrapper[5029]: I0313 20:34:11.772642 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" podUID="41f301ac-a1da-46a4-8314-6b1fcfe865a1" containerName="controller-manager" containerID="cri-o://5723f025091b10062122c4e6a998b1cbd30363f9ad993a66182b11851323dd75" gracePeriod=30 Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.288187 5029 generic.go:334] "Generic (PLEG): container finished" podID="41f301ac-a1da-46a4-8314-6b1fcfe865a1" containerID="5723f025091b10062122c4e6a998b1cbd30363f9ad993a66182b11851323dd75" exitCode=0 Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.288297 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" event={"ID":"41f301ac-a1da-46a4-8314-6b1fcfe865a1","Type":"ContainerDied","Data":"5723f025091b10062122c4e6a998b1cbd30363f9ad993a66182b11851323dd75"} 
Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.324828 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.461842 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-config\") pod \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.461909 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-client-ca\") pod \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.461933 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f301ac-a1da-46a4-8314-6b1fcfe865a1-serving-cert\") pod \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.461959 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vxbl\" (UniqueName: \"kubernetes.io/projected/41f301ac-a1da-46a4-8314-6b1fcfe865a1-kube-api-access-6vxbl\") pod \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.462026 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-proxy-ca-bundles\") pod \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\" (UID: \"41f301ac-a1da-46a4-8314-6b1fcfe865a1\") " Mar 13 20:34:12 crc 
kubenswrapper[5029]: I0313 20:34:12.462915 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "41f301ac-a1da-46a4-8314-6b1fcfe865a1" (UID: "41f301ac-a1da-46a4-8314-6b1fcfe865a1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.463343 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "41f301ac-a1da-46a4-8314-6b1fcfe865a1" (UID: "41f301ac-a1da-46a4-8314-6b1fcfe865a1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.463821 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-config" (OuterVolumeSpecName: "config") pod "41f301ac-a1da-46a4-8314-6b1fcfe865a1" (UID: "41f301ac-a1da-46a4-8314-6b1fcfe865a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.468589 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f301ac-a1da-46a4-8314-6b1fcfe865a1-kube-api-access-6vxbl" (OuterVolumeSpecName: "kube-api-access-6vxbl") pod "41f301ac-a1da-46a4-8314-6b1fcfe865a1" (UID: "41f301ac-a1da-46a4-8314-6b1fcfe865a1"). InnerVolumeSpecName "kube-api-access-6vxbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.468998 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f301ac-a1da-46a4-8314-6b1fcfe865a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41f301ac-a1da-46a4-8314-6b1fcfe865a1" (UID: "41f301ac-a1da-46a4-8314-6b1fcfe865a1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.563629 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.563681 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.563696 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f301ac-a1da-46a4-8314-6b1fcfe865a1-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.563710 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vxbl\" (UniqueName: \"kubernetes.io/projected/41f301ac-a1da-46a4-8314-6b1fcfe865a1-kube-api-access-6vxbl\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.563726 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f301ac-a1da-46a4-8314-6b1fcfe865a1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.828667 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bbbd65785-q2c2p"] 
Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.828891 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553bdc43-797f-401f-9ca0-875060ab0553" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.828904 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="553bdc43-797f-401f-9ca0-875060ab0553" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.828914 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.828920 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.828930 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.828936 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.828947 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.828953 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.828961 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.828966 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerName="extract-content" 
Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.828973 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.828979 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.828986 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1ea22d-3be3-412d-be38-ab360aae90e5" containerName="marketplace-operator" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.828992 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1ea22d-3be3-412d-be38-ab360aae90e5" containerName="marketplace-operator" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.828999 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151390c1-ebb0-49bf-be99-3326fc839781" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829007 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="151390c1-ebb0-49bf-be99-3326fc839781" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829015 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829020 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829026 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151390c1-ebb0-49bf-be99-3326fc839781" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829032 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="151390c1-ebb0-49bf-be99-3326fc839781" containerName="registry-server" 
Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829037 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829042 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829051 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829056 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829065 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553bdc43-797f-401f-9ca0-875060ab0553" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829071 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="553bdc43-797f-401f-9ca0-875060ab0553" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829080 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829086 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829095 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553bdc43-797f-401f-9ca0-875060ab0553" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829101 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="553bdc43-797f-401f-9ca0-875060ab0553" containerName="extract-content" Mar 13 
20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829109 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f301ac-a1da-46a4-8314-6b1fcfe865a1" containerName="controller-manager" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829117 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f301ac-a1da-46a4-8314-6b1fcfe865a1" containerName="controller-manager" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829125 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829130 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829138 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829145 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829154 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151390c1-ebb0-49bf-be99-3326fc839781" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829160 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="151390c1-ebb0-49bf-be99-3326fc839781" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829166 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829171 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" containerName="registry-server" Mar 13 
20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829181 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829186 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829193 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829200 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerName="extract-utilities" Mar 13 20:34:12 crc kubenswrapper[5029]: E0313 20:34:12.829208 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829214 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" containerName="extract-content" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829296 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f9d5d5-9771-4294-961f-110aa2430e29" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829306 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f301ac-a1da-46a4-8314-6b1fcfe865a1" containerName="controller-manager" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829314 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="553bdc43-797f-401f-9ca0-875060ab0553" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829324 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="866c95e1-566b-4e67-8822-b6c182cb3378" 
containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829332 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8fadb2-962e-4bca-8305-a51b8d2334bb" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829339 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="5760820d-9df0-4f3e-b14f-1c64e2607ecd" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829347 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="151390c1-ebb0-49bf-be99-3326fc839781" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829358 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e33b18fb-9cd7-4c30-bdb0-402734c47cc8" containerName="registry-server" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829370 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1ea22d-3be3-412d-be38-ab360aae90e5" containerName="marketplace-operator" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.829965 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.853348 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bbbd65785-q2c2p"] Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.974775 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-client-ca\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.975202 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrcjw\" (UniqueName: \"kubernetes.io/projected/630bc9b5-16f3-435b-bbb0-35d079cd837f-kube-api-access-lrcjw\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.975340 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-proxy-ca-bundles\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.975544 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/630bc9b5-16f3-435b-bbb0-35d079cd837f-serving-cert\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " 
pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:12 crc kubenswrapper[5029]: I0313 20:34:12.975669 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-config\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.076749 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrcjw\" (UniqueName: \"kubernetes.io/projected/630bc9b5-16f3-435b-bbb0-35d079cd837f-kube-api-access-lrcjw\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.076903 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-proxy-ca-bundles\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.076943 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/630bc9b5-16f3-435b-bbb0-35d079cd837f-serving-cert\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.076984 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-config\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.077013 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-client-ca\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.078842 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-client-ca\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.078969 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-config\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.079746 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-proxy-ca-bundles\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.083726 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/630bc9b5-16f3-435b-bbb0-35d079cd837f-serving-cert\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.101154 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrcjw\" (UniqueName: \"kubernetes.io/projected/630bc9b5-16f3-435b-bbb0-35d079cd837f-kube-api-access-lrcjw\") pod \"controller-manager-7bbbd65785-q2c2p\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.144131 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.297244 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" event={"ID":"41f301ac-a1da-46a4-8314-6b1fcfe865a1","Type":"ContainerDied","Data":"30f9d3b0d563ab9bb21625968f7f4d90979e0f5bd215a1324afe967290a5f313"} Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.297346 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.297705 5029 scope.go:117] "RemoveContainer" containerID="5723f025091b10062122c4e6a998b1cbd30363f9ad993a66182b11851323dd75" Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.321469 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv"] Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.326409 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86d9d4c6cb-9m6dv"] Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.556837 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bbbd65785-q2c2p"] Mar 13 20:34:13 crc kubenswrapper[5029]: I0313 20:34:13.626264 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl427"] Mar 13 20:34:14 crc kubenswrapper[5029]: I0313 20:34:14.305556 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" event={"ID":"630bc9b5-16f3-435b-bbb0-35d079cd837f","Type":"ContainerStarted","Data":"19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a"} Mar 13 20:34:14 crc kubenswrapper[5029]: I0313 20:34:14.305978 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" event={"ID":"630bc9b5-16f3-435b-bbb0-35d079cd837f","Type":"ContainerStarted","Data":"2df7c731fadc521d0da566561d137fb9f17e14cf402e3350c9a1ee84c89e5df8"} Mar 13 20:34:14 crc kubenswrapper[5029]: I0313 20:34:14.306003 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:14 crc kubenswrapper[5029]: I0313 20:34:14.311943 5029 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:14 crc kubenswrapper[5029]: I0313 20:34:14.321942 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" podStartSLOduration=3.321920973 podStartE2EDuration="3.321920973s" podCreationTimestamp="2026-03-13 20:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:34:14.320948675 +0000 UTC m=+414.337031078" watchObservedRunningTime="2026-03-13 20:34:14.321920973 +0000 UTC m=+414.338003396" Mar 13 20:34:14 crc kubenswrapper[5029]: I0313 20:34:14.605892 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f301ac-a1da-46a4-8314-6b1fcfe865a1" path="/var/lib/kubelet/pods/41f301ac-a1da-46a4-8314-6b1fcfe865a1/volumes" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.845064 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4tpjt"] Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.847096 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.849353 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.857192 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4tpjt"] Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.864797 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea119203-d4b1-426b-aa6e-4b49cb01f3a7-utilities\") pod \"redhat-operators-4tpjt\" (UID: \"ea119203-d4b1-426b-aa6e-4b49cb01f3a7\") " pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.864882 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea119203-d4b1-426b-aa6e-4b49cb01f3a7-catalog-content\") pod \"redhat-operators-4tpjt\" (UID: \"ea119203-d4b1-426b-aa6e-4b49cb01f3a7\") " pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.864955 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwdfg\" (UniqueName: \"kubernetes.io/projected/ea119203-d4b1-426b-aa6e-4b49cb01f3a7-kube-api-access-gwdfg\") pod \"redhat-operators-4tpjt\" (UID: \"ea119203-d4b1-426b-aa6e-4b49cb01f3a7\") " pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.966083 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwdfg\" (UniqueName: \"kubernetes.io/projected/ea119203-d4b1-426b-aa6e-4b49cb01f3a7-kube-api-access-gwdfg\") pod \"redhat-operators-4tpjt\" (UID: 
\"ea119203-d4b1-426b-aa6e-4b49cb01f3a7\") " pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.966525 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea119203-d4b1-426b-aa6e-4b49cb01f3a7-utilities\") pod \"redhat-operators-4tpjt\" (UID: \"ea119203-d4b1-426b-aa6e-4b49cb01f3a7\") " pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.966586 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea119203-d4b1-426b-aa6e-4b49cb01f3a7-catalog-content\") pod \"redhat-operators-4tpjt\" (UID: \"ea119203-d4b1-426b-aa6e-4b49cb01f3a7\") " pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.967054 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea119203-d4b1-426b-aa6e-4b49cb01f3a7-catalog-content\") pod \"redhat-operators-4tpjt\" (UID: \"ea119203-d4b1-426b-aa6e-4b49cb01f3a7\") " pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.967207 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea119203-d4b1-426b-aa6e-4b49cb01f3a7-utilities\") pod \"redhat-operators-4tpjt\" (UID: \"ea119203-d4b1-426b-aa6e-4b49cb01f3a7\") " pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:32 crc kubenswrapper[5029]: I0313 20:34:32.986023 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwdfg\" (UniqueName: \"kubernetes.io/projected/ea119203-d4b1-426b-aa6e-4b49cb01f3a7-kube-api-access-gwdfg\") pod \"redhat-operators-4tpjt\" (UID: \"ea119203-d4b1-426b-aa6e-4b49cb01f3a7\") " 
pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.046983 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xccj6"] Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.047982 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.050253 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.066810 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xccj6"] Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.067110 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dm2\" (UniqueName: \"kubernetes.io/projected/f796f770-1b16-4b7a-a52a-bdebed36f36b-kube-api-access-s4dm2\") pod \"redhat-marketplace-xccj6\" (UID: \"f796f770-1b16-4b7a-a52a-bdebed36f36b\") " pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.067172 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f796f770-1b16-4b7a-a52a-bdebed36f36b-catalog-content\") pod \"redhat-marketplace-xccj6\" (UID: \"f796f770-1b16-4b7a-a52a-bdebed36f36b\") " pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.067207 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f796f770-1b16-4b7a-a52a-bdebed36f36b-utilities\") pod \"redhat-marketplace-xccj6\" (UID: \"f796f770-1b16-4b7a-a52a-bdebed36f36b\") " 
pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.162407 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.167933 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dm2\" (UniqueName: \"kubernetes.io/projected/f796f770-1b16-4b7a-a52a-bdebed36f36b-kube-api-access-s4dm2\") pod \"redhat-marketplace-xccj6\" (UID: \"f796f770-1b16-4b7a-a52a-bdebed36f36b\") " pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.167987 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f796f770-1b16-4b7a-a52a-bdebed36f36b-catalog-content\") pod \"redhat-marketplace-xccj6\" (UID: \"f796f770-1b16-4b7a-a52a-bdebed36f36b\") " pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.168025 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f796f770-1b16-4b7a-a52a-bdebed36f36b-utilities\") pod \"redhat-marketplace-xccj6\" (UID: \"f796f770-1b16-4b7a-a52a-bdebed36f36b\") " pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.168560 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f796f770-1b16-4b7a-a52a-bdebed36f36b-utilities\") pod \"redhat-marketplace-xccj6\" (UID: \"f796f770-1b16-4b7a-a52a-bdebed36f36b\") " pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.169201 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f796f770-1b16-4b7a-a52a-bdebed36f36b-catalog-content\") pod \"redhat-marketplace-xccj6\" (UID: \"f796f770-1b16-4b7a-a52a-bdebed36f36b\") " pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.185403 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dm2\" (UniqueName: \"kubernetes.io/projected/f796f770-1b16-4b7a-a52a-bdebed36f36b-kube-api-access-s4dm2\") pod \"redhat-marketplace-xccj6\" (UID: \"f796f770-1b16-4b7a-a52a-bdebed36f36b\") " pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.364491 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.553776 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4tpjt"] Mar 13 20:34:33 crc kubenswrapper[5029]: I0313 20:34:33.776198 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xccj6"] Mar 13 20:34:33 crc kubenswrapper[5029]: W0313 20:34:33.816663 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf796f770_1b16_4b7a_a52a_bdebed36f36b.slice/crio-97b60e6f2a7f8d89835f3017878a8c1830facde30ca28d3b0780bc7ae0ae236d WatchSource:0}: Error finding container 97b60e6f2a7f8d89835f3017878a8c1830facde30ca28d3b0780bc7ae0ae236d: Status 404 returned error can't find the container with id 97b60e6f2a7f8d89835f3017878a8c1830facde30ca28d3b0780bc7ae0ae236d Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.412180 5029 generic.go:334] "Generic (PLEG): container finished" podID="f796f770-1b16-4b7a-a52a-bdebed36f36b" containerID="2df7e8fbf6c5d2dd5fcc53cce95abb4e29ae0682680c7e42528776ec2c9d8993" exitCode=0 Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 
20:34:34.412268 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccj6" event={"ID":"f796f770-1b16-4b7a-a52a-bdebed36f36b","Type":"ContainerDied","Data":"2df7e8fbf6c5d2dd5fcc53cce95abb4e29ae0682680c7e42528776ec2c9d8993"} Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.412834 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccj6" event={"ID":"f796f770-1b16-4b7a-a52a-bdebed36f36b","Type":"ContainerStarted","Data":"97b60e6f2a7f8d89835f3017878a8c1830facde30ca28d3b0780bc7ae0ae236d"} Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.414903 5029 generic.go:334] "Generic (PLEG): container finished" podID="ea119203-d4b1-426b-aa6e-4b49cb01f3a7" containerID="c31f1387f67dd32845a67d60606b8545ed1fb1e18923999d348a2bafea4dfe6a" exitCode=0 Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.414959 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tpjt" event={"ID":"ea119203-d4b1-426b-aa6e-4b49cb01f3a7","Type":"ContainerDied","Data":"c31f1387f67dd32845a67d60606b8545ed1fb1e18923999d348a2bafea4dfe6a"} Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.414992 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tpjt" event={"ID":"ea119203-d4b1-426b-aa6e-4b49cb01f3a7","Type":"ContainerStarted","Data":"7c6eb0acf788058bf0d0ad389a863b9bab9eaf87b8e3ddd2bf22019fbf69426e"} Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.650607 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4ssbt"] Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.671775 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4ssbt"] Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.671953 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.674311 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.787992 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-utilities\") pod \"community-operators-4ssbt\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.788143 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tkfx\" (UniqueName: \"kubernetes.io/projected/d5874fee-3658-412b-95c2-0cbdf9da9799-kube-api-access-7tkfx\") pod \"community-operators-4ssbt\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.788326 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-catalog-content\") pod \"community-operators-4ssbt\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.889985 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-catalog-content\") pod \"community-operators-4ssbt\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.890087 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-utilities\") pod \"community-operators-4ssbt\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.890114 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tkfx\" (UniqueName: \"kubernetes.io/projected/d5874fee-3658-412b-95c2-0cbdf9da9799-kube-api-access-7tkfx\") pod \"community-operators-4ssbt\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.890609 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-catalog-content\") pod \"community-operators-4ssbt\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.890843 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-utilities\") pod \"community-operators-4ssbt\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.912409 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tkfx\" (UniqueName: \"kubernetes.io/projected/d5874fee-3658-412b-95c2-0cbdf9da9799-kube-api-access-7tkfx\") pod \"community-operators-4ssbt\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:34 crc kubenswrapper[5029]: I0313 20:34:34.996619 5029 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.401363 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4ssbt"] Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.421074 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccj6" event={"ID":"f796f770-1b16-4b7a-a52a-bdebed36f36b","Type":"ContainerStarted","Data":"c9926cd9c3252dfcd9451e1f579568ab1f2d5e8f6493c8cbc0029dc61e0ce168"} Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.422737 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ssbt" event={"ID":"d5874fee-3658-412b-95c2-0cbdf9da9799","Type":"ContainerStarted","Data":"17d5fb7d0b77f6a885d7a8643b18e63ba6f5084adbf327fe3ec33b7de06e6bfc"} Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.651871 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6bpnw"] Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.653154 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.655701 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.660236 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bpnw"] Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.803675 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbtc\" (UniqueName: \"kubernetes.io/projected/15fd0736-9d55-436e-ac0d-de5e11d0a0b4-kube-api-access-tbbtc\") pod \"certified-operators-6bpnw\" (UID: \"15fd0736-9d55-436e-ac0d-de5e11d0a0b4\") " pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.804002 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fd0736-9d55-436e-ac0d-de5e11d0a0b4-catalog-content\") pod \"certified-operators-6bpnw\" (UID: \"15fd0736-9d55-436e-ac0d-de5e11d0a0b4\") " pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.804032 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fd0736-9d55-436e-ac0d-de5e11d0a0b4-utilities\") pod \"certified-operators-6bpnw\" (UID: \"15fd0736-9d55-436e-ac0d-de5e11d0a0b4\") " pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.904773 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fd0736-9d55-436e-ac0d-de5e11d0a0b4-catalog-content\") pod \"certified-operators-6bpnw\" (UID: 
\"15fd0736-9d55-436e-ac0d-de5e11d0a0b4\") " pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.904828 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fd0736-9d55-436e-ac0d-de5e11d0a0b4-utilities\") pod \"certified-operators-6bpnw\" (UID: \"15fd0736-9d55-436e-ac0d-de5e11d0a0b4\") " pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.904891 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbtc\" (UniqueName: \"kubernetes.io/projected/15fd0736-9d55-436e-ac0d-de5e11d0a0b4-kube-api-access-tbbtc\") pod \"certified-operators-6bpnw\" (UID: \"15fd0736-9d55-436e-ac0d-de5e11d0a0b4\") " pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.905375 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fd0736-9d55-436e-ac0d-de5e11d0a0b4-utilities\") pod \"certified-operators-6bpnw\" (UID: \"15fd0736-9d55-436e-ac0d-de5e11d0a0b4\") " pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.905376 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fd0736-9d55-436e-ac0d-de5e11d0a0b4-catalog-content\") pod \"certified-operators-6bpnw\" (UID: \"15fd0736-9d55-436e-ac0d-de5e11d0a0b4\") " pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:35 crc kubenswrapper[5029]: I0313 20:34:35.922890 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbbtc\" (UniqueName: \"kubernetes.io/projected/15fd0736-9d55-436e-ac0d-de5e11d0a0b4-kube-api-access-tbbtc\") pod \"certified-operators-6bpnw\" (UID: 
\"15fd0736-9d55-436e-ac0d-de5e11d0a0b4\") " pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:36 crc kubenswrapper[5029]: I0313 20:34:36.015648 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:36 crc kubenswrapper[5029]: I0313 20:34:36.429616 5029 generic.go:334] "Generic (PLEG): container finished" podID="ea119203-d4b1-426b-aa6e-4b49cb01f3a7" containerID="30791e6e9ceadd77b0d45c9be84ac4b8261a5334bfd2b8fc145a152bcb2da076" exitCode=0 Mar 13 20:34:36 crc kubenswrapper[5029]: I0313 20:34:36.430025 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tpjt" event={"ID":"ea119203-d4b1-426b-aa6e-4b49cb01f3a7","Type":"ContainerDied","Data":"30791e6e9ceadd77b0d45c9be84ac4b8261a5334bfd2b8fc145a152bcb2da076"} Mar 13 20:34:36 crc kubenswrapper[5029]: I0313 20:34:36.432292 5029 generic.go:334] "Generic (PLEG): container finished" podID="f796f770-1b16-4b7a-a52a-bdebed36f36b" containerID="c9926cd9c3252dfcd9451e1f579568ab1f2d5e8f6493c8cbc0029dc61e0ce168" exitCode=0 Mar 13 20:34:36 crc kubenswrapper[5029]: I0313 20:34:36.432361 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccj6" event={"ID":"f796f770-1b16-4b7a-a52a-bdebed36f36b","Type":"ContainerDied","Data":"c9926cd9c3252dfcd9451e1f579568ab1f2d5e8f6493c8cbc0029dc61e0ce168"} Mar 13 20:34:36 crc kubenswrapper[5029]: I0313 20:34:36.438311 5029 generic.go:334] "Generic (PLEG): container finished" podID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerID="0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b" exitCode=0 Mar 13 20:34:36 crc kubenswrapper[5029]: I0313 20:34:36.438376 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ssbt" 
event={"ID":"d5874fee-3658-412b-95c2-0cbdf9da9799","Type":"ContainerDied","Data":"0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b"} Mar 13 20:34:36 crc kubenswrapper[5029]: I0313 20:34:36.440648 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bpnw"] Mar 13 20:34:37 crc kubenswrapper[5029]: I0313 20:34:37.449145 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xccj6" event={"ID":"f796f770-1b16-4b7a-a52a-bdebed36f36b","Type":"ContainerStarted","Data":"4612dc4ce1ee1ac1cfa0bebddce1df1c5998df8e4166d316b81e1a8ee0b79ae4"} Mar 13 20:34:37 crc kubenswrapper[5029]: I0313 20:34:37.450626 5029 generic.go:334] "Generic (PLEG): container finished" podID="15fd0736-9d55-436e-ac0d-de5e11d0a0b4" containerID="c904be4d777c2a12d02878a11eda20ba618ef945db3bf20f3316dcff109eedb5" exitCode=0 Mar 13 20:34:37 crc kubenswrapper[5029]: I0313 20:34:37.450679 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpnw" event={"ID":"15fd0736-9d55-436e-ac0d-de5e11d0a0b4","Type":"ContainerDied","Data":"c904be4d777c2a12d02878a11eda20ba618ef945db3bf20f3316dcff109eedb5"} Mar 13 20:34:37 crc kubenswrapper[5029]: I0313 20:34:37.450696 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpnw" event={"ID":"15fd0736-9d55-436e-ac0d-de5e11d0a0b4","Type":"ContainerStarted","Data":"9f05b59ddbd4afd998de5ba8663efa2651a97956d88046faa4ed26f5915809f3"} Mar 13 20:34:37 crc kubenswrapper[5029]: I0313 20:34:37.457267 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tpjt" event={"ID":"ea119203-d4b1-426b-aa6e-4b49cb01f3a7","Type":"ContainerStarted","Data":"7042434b60f0864e1c1407f7b12973c7acb55492bd77c863e29595fa810a9507"} Mar 13 20:34:37 crc kubenswrapper[5029]: I0313 20:34:37.468068 5029 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-xccj6" podStartSLOduration=1.83009538 podStartE2EDuration="4.468045921s" podCreationTimestamp="2026-03-13 20:34:33 +0000 UTC" firstStartedPulling="2026-03-13 20:34:34.414256176 +0000 UTC m=+434.430338579" lastFinishedPulling="2026-03-13 20:34:37.052206707 +0000 UTC m=+437.068289120" observedRunningTime="2026-03-13 20:34:37.465683387 +0000 UTC m=+437.481765820" watchObservedRunningTime="2026-03-13 20:34:37.468045921 +0000 UTC m=+437.484128324" Mar 13 20:34:37 crc kubenswrapper[5029]: I0313 20:34:37.503565 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4tpjt" podStartSLOduration=3.052817501 podStartE2EDuration="5.503547219s" podCreationTimestamp="2026-03-13 20:34:32 +0000 UTC" firstStartedPulling="2026-03-13 20:34:34.418596294 +0000 UTC m=+434.434678687" lastFinishedPulling="2026-03-13 20:34:36.869326002 +0000 UTC m=+436.885408405" observedRunningTime="2026-03-13 20:34:37.501935935 +0000 UTC m=+437.518018348" watchObservedRunningTime="2026-03-13 20:34:37.503547219 +0000 UTC m=+437.519629622" Mar 13 20:34:38 crc kubenswrapper[5029]: I0313 20:34:38.651939 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" podUID="e9f4273c-6ab2-48dd-af0c-f6f03b91d037" containerName="oauth-openshift" containerID="cri-o://0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce" gracePeriod=15 Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.181324 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.210709 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"] Mar 13 20:34:39 crc kubenswrapper[5029]: E0313 20:34:39.210954 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f4273c-6ab2-48dd-af0c-f6f03b91d037" containerName="oauth-openshift" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.210967 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f4273c-6ab2-48dd-af0c-f6f03b91d037" containerName="oauth-openshift" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.211058 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f4273c-6ab2-48dd-af0c-f6f03b91d037" containerName="oauth-openshift" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.211425 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.231934 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"] Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.349973 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-session\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.350065 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-login\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: 
\"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.350106 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-serving-cert\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.350145 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-service-ca\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.350192 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-idp-0-file-data\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.350221 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-router-certs\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.350281 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-trusted-ca-bundle\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 
crc kubenswrapper[5029]: I0313 20:34:39.350929 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.350940 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351106 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-ocp-branding-template\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351155 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96vqt\" (UniqueName: \"kubernetes.io/projected/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-kube-api-access-96vqt\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351179 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-cliconfig\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351206 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-dir\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351232 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-policies\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351259 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-provider-selection\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351324 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-error\") pod \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\" (UID: \"e9f4273c-6ab2-48dd-af0c-f6f03b91d037\") " Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351528 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351558 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-audit-policies\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351580 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9c7dd8b-7755-4097-8085-29d2b4e965d9-audit-dir\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351605 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351636 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 
20:34:39.351674 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-service-ca\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351707 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-router-certs\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351731 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351761 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-template-error\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351790 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wbxt\" (UniqueName: 
\"kubernetes.io/projected/a9c7dd8b-7755-4097-8085-29d2b4e965d9-kube-api-access-7wbxt\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351819 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-template-login\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351880 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-session\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351914 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351937 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: 
\"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351981 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.351995 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.353237 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.353376 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.357349 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.362249 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.369157 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-kube-api-access-96vqt" (OuterVolumeSpecName: "kube-api-access-96vqt") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "kube-api-access-96vqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.373141 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.373494 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.374193 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.377218 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.377480 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.377791 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.377834 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e9f4273c-6ab2-48dd-af0c-f6f03b91d037" (UID: "e9f4273c-6ab2-48dd-af0c-f6f03b91d037"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452704 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-session\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452753 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452773 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452793 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452814 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-audit-policies\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452831 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9c7dd8b-7755-4097-8085-29d2b4e965d9-audit-dir\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452875 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452903 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452929 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-service-ca\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452950 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-router-certs\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452968 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.452988 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-template-error\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453013 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wbxt\" (UniqueName: \"kubernetes.io/projected/a9c7dd8b-7755-4097-8085-29d2b4e965d9-kube-api-access-7wbxt\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453037 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-template-login\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453080 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453092 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96vqt\" (UniqueName: \"kubernetes.io/projected/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-kube-api-access-96vqt\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453101 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453110 5029 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-dir\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453119 5029 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453129 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453140 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453154 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453167 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453180 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453192 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453204 5029 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9f4273c-6ab2-48dd-af0c-f6f03b91d037-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.453906 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9c7dd8b-7755-4097-8085-29d2b4e965d9-audit-dir\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.454052 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-audit-policies\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.454506 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.454714 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-service-ca\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.455220 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.456658 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.458120 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.461317 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-router-certs\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.461344 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-template-error\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.461350 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.461622 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-user-template-login\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.461638 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-session\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.461959 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c7dd8b-7755-4097-8085-29d2b4e965d9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.471814 5029 generic.go:334] "Generic (PLEG): container finished" podID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerID="2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc" exitCode=0
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.471906 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ssbt" event={"ID":"d5874fee-3658-412b-95c2-0cbdf9da9799","Type":"ContainerDied","Data":"2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc"}
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.474324 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wbxt\" (UniqueName: \"kubernetes.io/projected/a9c7dd8b-7755-4097-8085-29d2b4e965d9-kube-api-access-7wbxt\") pod \"oauth-openshift-fc8b9c7b8-f6qhh\" (UID: \"a9c7dd8b-7755-4097-8085-29d2b4e965d9\") " pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.475658 5029 generic.go:334] "Generic (PLEG): container finished" podID="e9f4273c-6ab2-48dd-af0c-f6f03b91d037" containerID="0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce" exitCode=0
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.475891 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sl427"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.476012 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" event={"ID":"e9f4273c-6ab2-48dd-af0c-f6f03b91d037","Type":"ContainerDied","Data":"0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce"}
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.476058 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sl427" event={"ID":"e9f4273c-6ab2-48dd-af0c-f6f03b91d037","Type":"ContainerDied","Data":"57e8089bf2e476fc3a6832eed1ef34966aadd08d0fe107465e5caa7ae4d10c4a"}
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.476086 5029 scope.go:117] "RemoveContainer" containerID="0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.479366 5029 generic.go:334] "Generic (PLEG): container finished" podID="15fd0736-9d55-436e-ac0d-de5e11d0a0b4" containerID="1c96bb07ca36d26b704ce2abd4236f090f50112e534aa7b90b3bfe55bcd74772" exitCode=0
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.479406 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpnw" event={"ID":"15fd0736-9d55-436e-ac0d-de5e11d0a0b4","Type":"ContainerDied","Data":"1c96bb07ca36d26b704ce2abd4236f090f50112e534aa7b90b3bfe55bcd74772"}
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.524833 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.525177 5029 scope.go:117] "RemoveContainer" containerID="0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce"
Mar 13 20:34:39 crc kubenswrapper[5029]: E0313 20:34:39.526345 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce\": container with ID starting with 0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce not found: ID does not exist" containerID="0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.526609 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce"} err="failed to get container status \"0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce\": rpc error: code = NotFound desc = could not find container \"0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce\": container with ID starting with 0b664449e524ce8b406d678006ac3188016bee20bf85c95585e7eba9e995a5ce not found: ID does not exist"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.536758 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl427"]
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.540677 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl427"]
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.786589 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-85qtk"]
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.787369 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.807419 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-85qtk"]
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.938897 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"]
Mar 13 20:34:39 crc kubenswrapper[5029]: W0313 20:34:39.945636 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9c7dd8b_7755_4097_8085_29d2b4e965d9.slice/crio-bb09d79fbee93bdcd998092b12bae6439d01af7634dd277fa11a883e51b64d37 WatchSource:0}: Error finding container bb09d79fbee93bdcd998092b12bae6439d01af7634dd277fa11a883e51b64d37: Status 404 returned error can't find the container with id bb09d79fbee93bdcd998092b12bae6439d01af7634dd277fa11a883e51b64d37
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.963588 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d5d3a430-d51f-4833-bb68-08772c2daafa-registry-certificates\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.963656 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d5d3a430-d51f-4833-bb68-08772c2daafa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.963693 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5d3a430-d51f-4833-bb68-08772c2daafa-trusted-ca\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.963793 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d5d3a430-d51f-4833-bb68-08772c2daafa-registry-tls\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.963829 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.963888 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsjpf\" (UniqueName: \"kubernetes.io/projected/d5d3a430-d51f-4833-bb68-08772c2daafa-kube-api-access-fsjpf\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.963914 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d5d3a430-d51f-4833-bb68-08772c2daafa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:39 crc kubenswrapper[5029]: I0313 20:34:39.963939 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5d3a430-d51f-4833-bb68-08772c2daafa-bound-sa-token\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.001525 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.065234 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsjpf\" (UniqueName: \"kubernetes.io/projected/d5d3a430-d51f-4833-bb68-08772c2daafa-kube-api-access-fsjpf\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.065300 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d5d3a430-d51f-4833-bb68-08772c2daafa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.065334 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5d3a430-d51f-4833-bb68-08772c2daafa-bound-sa-token\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.065361 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d5d3a430-d51f-4833-bb68-08772c2daafa-registry-certificates\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.065393 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d5d3a430-d51f-4833-bb68-08772c2daafa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.065424 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5d3a430-d51f-4833-bb68-08772c2daafa-trusted-ca\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.065489 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d5d3a430-d51f-4833-bb68-08772c2daafa-registry-tls\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.067207 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5d3a430-d51f-4833-bb68-08772c2daafa-trusted-ca\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.067599 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d5d3a430-d51f-4833-bb68-08772c2daafa-registry-certificates\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.067992 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d5d3a430-d51f-4833-bb68-08772c2daafa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.072842 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d5d3a430-d51f-4833-bb68-08772c2daafa-registry-tls\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.073562 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d5d3a430-d51f-4833-bb68-08772c2daafa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.086715 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsjpf\" (UniqueName: \"kubernetes.io/projected/d5d3a430-d51f-4833-bb68-08772c2daafa-kube-api-access-fsjpf\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.088242 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5d3a430-d51f-4833-bb68-08772c2daafa-bound-sa-token\") pod \"image-registry-66df7c8f76-85qtk\" (UID: \"d5d3a430-d51f-4833-bb68-08772c2daafa\") " pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.109132 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.487523 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" event={"ID":"a9c7dd8b-7755-4097-8085-29d2b4e965d9","Type":"ContainerStarted","Data":"49c0746da5247da37d4efdca1e91de538280c09dda1f5a58aa8e8272ed6b46a7"}
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.488267 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" event={"ID":"a9c7dd8b-7755-4097-8085-29d2b4e965d9","Type":"ContainerStarted","Data":"bb09d79fbee93bdcd998092b12bae6439d01af7634dd277fa11a883e51b64d37"}
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.488291 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.491053 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bpnw" event={"ID":"15fd0736-9d55-436e-ac0d-de5e11d0a0b4","Type":"ContainerStarted","Data":"83e5076dad140dcc8972af2ef914f0464049187022952142b0dd46521e5c880a"}
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.493199 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ssbt" event={"ID":"d5874fee-3658-412b-95c2-0cbdf9da9799","Type":"ContainerStarted","Data":"f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23"}
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.515388 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh" podStartSLOduration=27.515366471 podStartE2EDuration="27.515366471s" podCreationTimestamp="2026-03-13 20:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:34:40.507878767 +0000 UTC m=+440.523961180" watchObservedRunningTime="2026-03-13 20:34:40.515366471 +0000 UTC m=+440.531448874"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.534645 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4ssbt" podStartSLOduration=2.8671605529999997 podStartE2EDuration="6.534620356s" podCreationTimestamp="2026-03-13 20:34:34 +0000 UTC" firstStartedPulling="2026-03-13 20:34:36.440562946 +0000 UTC m=+436.456645359" lastFinishedPulling="2026-03-13 20:34:40.108022769 +0000 UTC m=+440.124105162" observedRunningTime="2026-03-13 20:34:40.532222861 +0000 UTC m=+440.548305264" watchObservedRunningTime="2026-03-13 20:34:40.534620356 +0000 UTC m=+440.550702759"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.550161 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6bpnw" podStartSLOduration=2.957672356 podStartE2EDuration="5.550138368s" podCreationTimestamp="2026-03-13 20:34:35 +0000 UTC" firstStartedPulling="2026-03-13 20:34:37.452101917 +0000 UTC m=+437.468184320" lastFinishedPulling="2026-03-13 20:34:40.044567929 +0000 UTC m=+440.060650332" observedRunningTime="2026-03-13 20:34:40.547750423 +0000 UTC m=+440.563832856" watchObservedRunningTime="2026-03-13 20:34:40.550138368 +0000 UTC m=+440.566220771"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.609981 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f4273c-6ab2-48dd-af0c-f6f03b91d037" path="/var/lib/kubelet/pods/e9f4273c-6ab2-48dd-af0c-f6f03b91d037/volumes"
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.646529 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-85qtk"]
Mar 13 20:34:40 crc kubenswrapper[5029]: W0313 20:34:40.664123 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5d3a430_d51f_4833_bb68_08772c2daafa.slice/crio-150a2cd514913cfa54dbbb89417ebb04d0c378b88ca14fc351737584feb5a616 WatchSource:0}: Error finding container 150a2cd514913cfa54dbbb89417ebb04d0c378b88ca14fc351737584feb5a616: Status 404 returned error can't find the container with id 150a2cd514913cfa54dbbb89417ebb04d0c378b88ca14fc351737584feb5a616
Mar 13 20:34:40 crc kubenswrapper[5029]: I0313 20:34:40.907185 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-fc8b9c7b8-f6qhh"
Mar 13 20:34:41 crc kubenswrapper[5029]: I0313 20:34:41.501246 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-85qtk" event={"ID":"d5d3a430-d51f-4833-bb68-08772c2daafa","Type":"ContainerStarted","Data":"f14d3b21bba2fc37a11e12832f6b83cc1619b4d2070d12a794fe497e3339275a"}
Mar 13 20:34:41 crc kubenswrapper[5029]: I0313 20:34:41.501598 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-85qtk" event={"ID":"d5d3a430-d51f-4833-bb68-08772c2daafa","Type":"ContainerStarted","Data":"150a2cd514913cfa54dbbb89417ebb04d0c378b88ca14fc351737584feb5a616"}
Mar 13 20:34:41 crc kubenswrapper[5029]: I0313 20:34:41.518152 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-85qtk" podStartSLOduration=2.518130753 podStartE2EDuration="2.518130753s" podCreationTimestamp="2026-03-13 20:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:34:41.518117303 +0000 UTC m=+441.534199726" watchObservedRunningTime="2026-03-13 20:34:41.518130753 +0000 UTC m=+441.534213156"
Mar 13 20:34:42 crc kubenswrapper[5029]: I0313 20:34:42.505570 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-85qtk"
Mar 13 20:34:43 crc kubenswrapper[5029]: I0313 20:34:43.162599 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4tpjt"
Mar 13 20:34:43 crc kubenswrapper[5029]: I0313 20:34:43.162943 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4tpjt"
Mar 13 20:34:43 crc kubenswrapper[5029]: I0313 20:34:43.365391 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xccj6"
Mar 13 20:34:43 crc kubenswrapper[5029]: I0313 20:34:43.365451 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xccj6"
Mar 13 20:34:43 crc kubenswrapper[5029]: I0313 20:34:43.401503 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xccj6"
Mar 13 20:34:43 crc
kubenswrapper[5029]: I0313 20:34:43.545837 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xccj6" Mar 13 20:34:44 crc kubenswrapper[5029]: I0313 20:34:44.203594 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4tpjt" podUID="ea119203-d4b1-426b-aa6e-4b49cb01f3a7" containerName="registry-server" probeResult="failure" output=< Mar 13 20:34:44 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s Mar 13 20:34:44 crc kubenswrapper[5029]: > Mar 13 20:34:44 crc kubenswrapper[5029]: I0313 20:34:44.997552 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:44 crc kubenswrapper[5029]: I0313 20:34:44.997617 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:45 crc kubenswrapper[5029]: I0313 20:34:45.044425 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:45 crc kubenswrapper[5029]: I0313 20:34:45.562818 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4ssbt" Mar 13 20:34:46 crc kubenswrapper[5029]: I0313 20:34:46.016106 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:46 crc kubenswrapper[5029]: I0313 20:34:46.016991 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:46 crc kubenswrapper[5029]: I0313 20:34:46.053379 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:46 crc kubenswrapper[5029]: I0313 20:34:46.566335 5029 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6bpnw" Mar 13 20:34:51 crc kubenswrapper[5029]: I0313 20:34:51.787502 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bbbd65785-q2c2p"] Mar 13 20:34:51 crc kubenswrapper[5029]: I0313 20:34:51.789931 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" podUID="630bc9b5-16f3-435b-bbb0-35d079cd837f" containerName="controller-manager" containerID="cri-o://19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a" gracePeriod=30 Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.171837 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.343688 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-client-ca\") pod \"630bc9b5-16f3-435b-bbb0-35d079cd837f\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.343801 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-config\") pod \"630bc9b5-16f3-435b-bbb0-35d079cd837f\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.343867 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrcjw\" (UniqueName: \"kubernetes.io/projected/630bc9b5-16f3-435b-bbb0-35d079cd837f-kube-api-access-lrcjw\") pod \"630bc9b5-16f3-435b-bbb0-35d079cd837f\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " Mar 13 20:34:52 crc 
kubenswrapper[5029]: I0313 20:34:52.343906 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-proxy-ca-bundles\") pod \"630bc9b5-16f3-435b-bbb0-35d079cd837f\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.343951 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/630bc9b5-16f3-435b-bbb0-35d079cd837f-serving-cert\") pod \"630bc9b5-16f3-435b-bbb0-35d079cd837f\" (UID: \"630bc9b5-16f3-435b-bbb0-35d079cd837f\") " Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.344644 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-client-ca" (OuterVolumeSpecName: "client-ca") pod "630bc9b5-16f3-435b-bbb0-35d079cd837f" (UID: "630bc9b5-16f3-435b-bbb0-35d079cd837f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.344820 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "630bc9b5-16f3-435b-bbb0-35d079cd837f" (UID: "630bc9b5-16f3-435b-bbb0-35d079cd837f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.345425 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-config" (OuterVolumeSpecName: "config") pod "630bc9b5-16f3-435b-bbb0-35d079cd837f" (UID: "630bc9b5-16f3-435b-bbb0-35d079cd837f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.351172 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630bc9b5-16f3-435b-bbb0-35d079cd837f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "630bc9b5-16f3-435b-bbb0-35d079cd837f" (UID: "630bc9b5-16f3-435b-bbb0-35d079cd837f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.351208 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/630bc9b5-16f3-435b-bbb0-35d079cd837f-kube-api-access-lrcjw" (OuterVolumeSpecName: "kube-api-access-lrcjw") pod "630bc9b5-16f3-435b-bbb0-35d079cd837f" (UID: "630bc9b5-16f3-435b-bbb0-35d079cd837f"). InnerVolumeSpecName "kube-api-access-lrcjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.445976 5029 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/630bc9b5-16f3-435b-bbb0-35d079cd837f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.446022 5029 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.446033 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.446045 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrcjw\" (UniqueName: \"kubernetes.io/projected/630bc9b5-16f3-435b-bbb0-35d079cd837f-kube-api-access-lrcjw\") on node \"crc\" DevicePath 
\"\"" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.446061 5029 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/630bc9b5-16f3-435b-bbb0-35d079cd837f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.561935 5029 generic.go:334] "Generic (PLEG): container finished" podID="630bc9b5-16f3-435b-bbb0-35d079cd837f" containerID="19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a" exitCode=0 Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.561989 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" event={"ID":"630bc9b5-16f3-435b-bbb0-35d079cd837f","Type":"ContainerDied","Data":"19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a"} Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.562010 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.562032 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bbbd65785-q2c2p" event={"ID":"630bc9b5-16f3-435b-bbb0-35d079cd837f","Type":"ContainerDied","Data":"2df7c731fadc521d0da566561d137fb9f17e14cf402e3350c9a1ee84c89e5df8"} Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.562054 5029 scope.go:117] "RemoveContainer" containerID="19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.585091 5029 scope.go:117] "RemoveContainer" containerID="19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a" Mar 13 20:34:52 crc kubenswrapper[5029]: E0313 20:34:52.585960 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a\": container with ID starting with 19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a not found: ID does not exist" containerID="19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.586006 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a"} err="failed to get container status \"19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a\": rpc error: code = NotFound desc = could not find container \"19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a\": container with ID starting with 19731fca0fc7dd4893d95f371cea7e945678612e2e0ef7258df6ca61e7643a7a not found: ID does not exist" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.591914 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bbbd65785-q2c2p"] Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.596377 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bbbd65785-q2c2p"] Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.605608 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="630bc9b5-16f3-435b-bbb0-35d079cd837f" path="/var/lib/kubelet/pods/630bc9b5-16f3-435b-bbb0-35d079cd837f/volumes" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.857639 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w"] Mar 13 20:34:52 crc kubenswrapper[5029]: E0313 20:34:52.857920 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630bc9b5-16f3-435b-bbb0-35d079cd837f" containerName="controller-manager" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.857937 5029 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="630bc9b5-16f3-435b-bbb0-35d079cd837f" containerName="controller-manager" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.858061 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="630bc9b5-16f3-435b-bbb0-35d079cd837f" containerName="controller-manager" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.858479 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.861435 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.861505 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.862190 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.862219 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.862461 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.862950 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.869663 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.871815 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w"] Mar 
13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.953227 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f41b3c27-be90-40d2-b542-7f4acaac0edb-client-ca\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.953280 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41b3c27-be90-40d2-b542-7f4acaac0edb-serving-cert\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.953332 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41b3c27-be90-40d2-b542-7f4acaac0edb-config\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.953495 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w65f7\" (UniqueName: \"kubernetes.io/projected/f41b3c27-be90-40d2-b542-7f4acaac0edb-kube-api-access-w65f7\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:52 crc kubenswrapper[5029]: I0313 20:34:52.953606 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/f41b3c27-be90-40d2-b542-7f4acaac0edb-proxy-ca-bundles\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.055499 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f41b3c27-be90-40d2-b542-7f4acaac0edb-proxy-ca-bundles\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.056974 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f41b3c27-be90-40d2-b542-7f4acaac0edb-proxy-ca-bundles\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.057159 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f41b3c27-be90-40d2-b542-7f4acaac0edb-client-ca\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.057240 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41b3c27-be90-40d2-b542-7f4acaac0edb-serving-cert\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.057945 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41b3c27-be90-40d2-b542-7f4acaac0edb-config\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.057975 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w65f7\" (UniqueName: \"kubernetes.io/projected/f41b3c27-be90-40d2-b542-7f4acaac0edb-kube-api-access-w65f7\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.057985 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f41b3c27-be90-40d2-b542-7f4acaac0edb-client-ca\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.058905 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41b3c27-be90-40d2-b542-7f4acaac0edb-config\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.061543 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41b3c27-be90-40d2-b542-7f4acaac0edb-serving-cert\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 
crc kubenswrapper[5029]: I0313 20:34:53.086987 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w65f7\" (UniqueName: \"kubernetes.io/projected/f41b3c27-be90-40d2-b542-7f4acaac0edb-kube-api-access-w65f7\") pod \"controller-manager-86d9d4c6cb-84v4w\" (UID: \"f41b3c27-be90-40d2-b542-7f4acaac0edb\") " pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.175564 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.217664 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.255362 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4tpjt" Mar 13 20:34:53 crc kubenswrapper[5029]: I0313 20:34:53.589809 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w"] Mar 13 20:34:53 crc kubenswrapper[5029]: W0313 20:34:53.596834 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf41b3c27_be90_40d2_b542_7f4acaac0edb.slice/crio-905125a486282f5df0ff0355b9be38ca34c3826d6f51363fcb03fe403f2829d6 WatchSource:0}: Error finding container 905125a486282f5df0ff0355b9be38ca34c3826d6f51363fcb03fe403f2829d6: Status 404 returned error can't find the container with id 905125a486282f5df0ff0355b9be38ca34c3826d6f51363fcb03fe403f2829d6 Mar 13 20:34:54 crc kubenswrapper[5029]: I0313 20:34:54.576151 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" 
event={"ID":"f41b3c27-be90-40d2-b542-7f4acaac0edb","Type":"ContainerStarted","Data":"1cd1c82401508ab993c1ad530e010a6be042c0fbe55427938918c499eee959de"} Mar 13 20:34:54 crc kubenswrapper[5029]: I0313 20:34:54.577132 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:54 crc kubenswrapper[5029]: I0313 20:34:54.577167 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" event={"ID":"f41b3c27-be90-40d2-b542-7f4acaac0edb","Type":"ContainerStarted","Data":"905125a486282f5df0ff0355b9be38ca34c3826d6f51363fcb03fe403f2829d6"} Mar 13 20:34:54 crc kubenswrapper[5029]: I0313 20:34:54.582419 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" Mar 13 20:34:54 crc kubenswrapper[5029]: I0313 20:34:54.594708 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86d9d4c6cb-84v4w" podStartSLOduration=3.594688944 podStartE2EDuration="3.594688944s" podCreationTimestamp="2026-03-13 20:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:34:54.594028846 +0000 UTC m=+454.610111269" watchObservedRunningTime="2026-03-13 20:34:54.594688944 +0000 UTC m=+454.610771347" Mar 13 20:35:00 crc kubenswrapper[5029]: I0313 20:35:00.114903 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-85qtk" Mar 13 20:35:00 crc kubenswrapper[5029]: I0313 20:35:00.205198 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lbggs"] Mar 13 20:35:01 crc kubenswrapper[5029]: I0313 20:35:01.950365 5029 patch_prober.go:28] interesting 
pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:35:01 crc kubenswrapper[5029]: I0313 20:35:01.950439 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.243021 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" podUID="120ab712-4dde-43e5-8e14-f755accec059" containerName="registry" containerID="cri-o://bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c" gracePeriod=30 Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.680344 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.735574 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"120ab712-4dde-43e5-8e14-f755accec059\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.735629 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvmh\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-kube-api-access-9nvmh\") pod \"120ab712-4dde-43e5-8e14-f755accec059\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.735655 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-registry-tls\") pod \"120ab712-4dde-43e5-8e14-f755accec059\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.735689 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/120ab712-4dde-43e5-8e14-f755accec059-ca-trust-extracted\") pod \"120ab712-4dde-43e5-8e14-f755accec059\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.735765 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-bound-sa-token\") pod \"120ab712-4dde-43e5-8e14-f755accec059\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.735833 5029 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-registry-certificates\") pod \"120ab712-4dde-43e5-8e14-f755accec059\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.735873 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-trusted-ca\") pod \"120ab712-4dde-43e5-8e14-f755accec059\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.735898 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/120ab712-4dde-43e5-8e14-f755accec059-installation-pull-secrets\") pod \"120ab712-4dde-43e5-8e14-f755accec059\" (UID: \"120ab712-4dde-43e5-8e14-f755accec059\") " Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.739814 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "120ab712-4dde-43e5-8e14-f755accec059" (UID: "120ab712-4dde-43e5-8e14-f755accec059"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.739889 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "120ab712-4dde-43e5-8e14-f755accec059" (UID: "120ab712-4dde-43e5-8e14-f755accec059"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.742262 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "120ab712-4dde-43e5-8e14-f755accec059" (UID: "120ab712-4dde-43e5-8e14-f755accec059"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.744713 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "120ab712-4dde-43e5-8e14-f755accec059" (UID: "120ab712-4dde-43e5-8e14-f755accec059"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.746932 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-kube-api-access-9nvmh" (OuterVolumeSpecName: "kube-api-access-9nvmh") pod "120ab712-4dde-43e5-8e14-f755accec059" (UID: "120ab712-4dde-43e5-8e14-f755accec059"). InnerVolumeSpecName "kube-api-access-9nvmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.747373 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/120ab712-4dde-43e5-8e14-f755accec059-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "120ab712-4dde-43e5-8e14-f755accec059" (UID: "120ab712-4dde-43e5-8e14-f755accec059"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.750509 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "120ab712-4dde-43e5-8e14-f755accec059" (UID: "120ab712-4dde-43e5-8e14-f755accec059"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.755074 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120ab712-4dde-43e5-8e14-f755accec059-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "120ab712-4dde-43e5-8e14-f755accec059" (UID: "120ab712-4dde-43e5-8e14-f755accec059"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.764792 5029 generic.go:334] "Generic (PLEG): container finished" podID="120ab712-4dde-43e5-8e14-f755accec059" containerID="bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c" exitCode=0 Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.764842 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" event={"ID":"120ab712-4dde-43e5-8e14-f755accec059","Type":"ContainerDied","Data":"bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c"} Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.764899 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.764923 5029 scope.go:117] "RemoveContainer" containerID="bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.764907 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lbggs" event={"ID":"120ab712-4dde-43e5-8e14-f755accec059","Type":"ContainerDied","Data":"93e3f8ad2f01fe32b11beb1a17b36cd2b3232126c51abaa0a3be9f30f89bd9dc"} Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.785789 5029 scope.go:117] "RemoveContainer" containerID="bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c" Mar 13 20:35:25 crc kubenswrapper[5029]: E0313 20:35:25.786303 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c\": container with ID starting with bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c not found: ID does not exist" containerID="bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.786364 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c"} err="failed to get container status \"bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c\": rpc error: code = NotFound desc = could not find container \"bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c\": container with ID starting with bf49993e140a9337e04ebe03b3816e6b966f3a4728825816d10ebc57e898812c not found: ID does not exist" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.802811 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-lbggs"] Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.808181 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lbggs"] Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.837308 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nvmh\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-kube-api-access-9nvmh\") on node \"crc\" DevicePath \"\"" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.837347 5029 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.837358 5029 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/120ab712-4dde-43e5-8e14-f755accec059-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.837368 5029 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/120ab712-4dde-43e5-8e14-f755accec059-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.837376 5029 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.837385 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/120ab712-4dde-43e5-8e14-f755accec059-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:35:25 crc kubenswrapper[5029]: I0313 20:35:25.837393 5029 reconciler_common.go:293] "Volume detached for 
volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/120ab712-4dde-43e5-8e14-f755accec059-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 13 20:35:26 crc kubenswrapper[5029]: I0313 20:35:26.607283 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120ab712-4dde-43e5-8e14-f755accec059" path="/var/lib/kubelet/pods/120ab712-4dde-43e5-8e14-f755accec059/volumes" Mar 13 20:35:31 crc kubenswrapper[5029]: I0313 20:35:31.950226 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:35:31 crc kubenswrapper[5029]: I0313 20:35:31.950922 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.127863 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557236-wtjlj"] Mar 13 20:36:00 crc kubenswrapper[5029]: E0313 20:36:00.128913 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120ab712-4dde-43e5-8e14-f755accec059" containerName="registry" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.128930 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="120ab712-4dde-43e5-8e14-f755accec059" containerName="registry" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.129042 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="120ab712-4dde-43e5-8e14-f755accec059" containerName="registry" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.129473 5029 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557236-wtjlj" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.131801 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.131889 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.131799 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.146500 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557236-wtjlj"] Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.207865 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb-kube-api-access-gslh4\") pod \"auto-csr-approver-29557236-wtjlj\" (UID: \"874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb\") " pod="openshift-infra/auto-csr-approver-29557236-wtjlj" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.309062 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb-kube-api-access-gslh4\") pod \"auto-csr-approver-29557236-wtjlj\" (UID: \"874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb\") " pod="openshift-infra/auto-csr-approver-29557236-wtjlj" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.333340 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb-kube-api-access-gslh4\") pod \"auto-csr-approver-29557236-wtjlj\" (UID: 
\"874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb\") " pod="openshift-infra/auto-csr-approver-29557236-wtjlj" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.489653 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557236-wtjlj" Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.928584 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557236-wtjlj"] Mar 13 20:36:00 crc kubenswrapper[5029]: I0313 20:36:00.964955 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557236-wtjlj" event={"ID":"874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb","Type":"ContainerStarted","Data":"1f3607a37fe3ad33351c29e2e9c6a5aa8ac7a843d9b85715a36d22c2802bfec5"} Mar 13 20:36:01 crc kubenswrapper[5029]: I0313 20:36:01.949783 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:36:01 crc kubenswrapper[5029]: I0313 20:36:01.949915 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:36:01 crc kubenswrapper[5029]: I0313 20:36:01.949982 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:36:01 crc kubenswrapper[5029]: I0313 20:36:01.951359 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"120ec79d685d8e39b184565b1c63047076832380141fec1a83b868fe6ea8eef7"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 20:36:01 crc kubenswrapper[5029]: I0313 20:36:01.951428 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://120ec79d685d8e39b184565b1c63047076832380141fec1a83b868fe6ea8eef7" gracePeriod=600 Mar 13 20:36:02 crc kubenswrapper[5029]: I0313 20:36:02.987341 5029 generic.go:334] "Generic (PLEG): container finished" podID="874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb" containerID="9d2449622bd8dad48704677f7227479939088aa1ce67dafff5ace52e0184bcab" exitCode=0 Mar 13 20:36:02 crc kubenswrapper[5029]: I0313 20:36:02.987456 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557236-wtjlj" event={"ID":"874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb","Type":"ContainerDied","Data":"9d2449622bd8dad48704677f7227479939088aa1ce67dafff5ace52e0184bcab"} Mar 13 20:36:02 crc kubenswrapper[5029]: I0313 20:36:02.992312 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="120ec79d685d8e39b184565b1c63047076832380141fec1a83b868fe6ea8eef7" exitCode=0 Mar 13 20:36:02 crc kubenswrapper[5029]: I0313 20:36:02.992353 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"120ec79d685d8e39b184565b1c63047076832380141fec1a83b868fe6ea8eef7"} Mar 13 20:36:02 crc kubenswrapper[5029]: I0313 20:36:02.992376 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"f8fcc9f784c6978226030105fcd2101ebdcc99b3d39948d8d2fe198f91727390"} Mar 13 20:36:02 crc kubenswrapper[5029]: I0313 20:36:02.992395 5029 scope.go:117] "RemoveContainer" containerID="34d3ccaab80119ba8d1bc57f333ec5b52f3a581cdf4cb7944cdce8f9e342a2d5" Mar 13 20:36:04 crc kubenswrapper[5029]: I0313 20:36:04.307827 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557236-wtjlj" Mar 13 20:36:04 crc kubenswrapper[5029]: I0313 20:36:04.373387 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb-kube-api-access-gslh4\") pod \"874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb\" (UID: \"874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb\") " Mar 13 20:36:04 crc kubenswrapper[5029]: I0313 20:36:04.381787 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb-kube-api-access-gslh4" (OuterVolumeSpecName: "kube-api-access-gslh4") pod "874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb" (UID: "874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb"). InnerVolumeSpecName "kube-api-access-gslh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:36:04 crc kubenswrapper[5029]: I0313 20:36:04.475934 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gslh4\" (UniqueName: \"kubernetes.io/projected/874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb-kube-api-access-gslh4\") on node \"crc\" DevicePath \"\"" Mar 13 20:36:05 crc kubenswrapper[5029]: I0313 20:36:05.014600 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557236-wtjlj" event={"ID":"874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb","Type":"ContainerDied","Data":"1f3607a37fe3ad33351c29e2e9c6a5aa8ac7a843d9b85715a36d22c2802bfec5"} Mar 13 20:36:05 crc kubenswrapper[5029]: I0313 20:36:05.014663 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f3607a37fe3ad33351c29e2e9c6a5aa8ac7a843d9b85715a36d22c2802bfec5" Mar 13 20:36:05 crc kubenswrapper[5029]: I0313 20:36:05.014699 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557236-wtjlj" Mar 13 20:36:05 crc kubenswrapper[5029]: I0313 20:36:05.380912 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557230-trnjq"] Mar 13 20:36:05 crc kubenswrapper[5029]: I0313 20:36:05.385865 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557230-trnjq"] Mar 13 20:36:06 crc kubenswrapper[5029]: I0313 20:36:06.614078 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd" path="/var/lib/kubelet/pods/5ddd8ae7-2043-4d10-bd7f-f94801bbb3cd/volumes" Mar 13 20:37:21 crc kubenswrapper[5029]: I0313 20:37:21.036960 5029 scope.go:117] "RemoveContainer" containerID="e1b2e24f22b81535fd96f08f41a9c957514a896f37fe4a81436d8c088be2b20a" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.148891 5029 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29557238-f9gcs"] Mar 13 20:38:00 crc kubenswrapper[5029]: E0313 20:38:00.149807 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb" containerName="oc" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.149823 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb" containerName="oc" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.151109 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb" containerName="oc" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.152317 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557238-f9gcs" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.157412 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.158340 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.160279 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.188111 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557238-f9gcs"] Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.270110 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s642x\" (UniqueName: \"kubernetes.io/projected/4bdfb127-5a55-4091-88f0-0d36e140afab-kube-api-access-s642x\") pod \"auto-csr-approver-29557238-f9gcs\" (UID: \"4bdfb127-5a55-4091-88f0-0d36e140afab\") " pod="openshift-infra/auto-csr-approver-29557238-f9gcs" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 
20:38:00.372188 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s642x\" (UniqueName: \"kubernetes.io/projected/4bdfb127-5a55-4091-88f0-0d36e140afab-kube-api-access-s642x\") pod \"auto-csr-approver-29557238-f9gcs\" (UID: \"4bdfb127-5a55-4091-88f0-0d36e140afab\") " pod="openshift-infra/auto-csr-approver-29557238-f9gcs" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.397979 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s642x\" (UniqueName: \"kubernetes.io/projected/4bdfb127-5a55-4091-88f0-0d36e140afab-kube-api-access-s642x\") pod \"auto-csr-approver-29557238-f9gcs\" (UID: \"4bdfb127-5a55-4091-88f0-0d36e140afab\") " pod="openshift-infra/auto-csr-approver-29557238-f9gcs" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.504835 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557238-f9gcs" Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.744354 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557238-f9gcs"] Mar 13 20:38:00 crc kubenswrapper[5029]: I0313 20:38:00.753803 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 20:38:01 crc kubenswrapper[5029]: I0313 20:38:01.134534 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557238-f9gcs" event={"ID":"4bdfb127-5a55-4091-88f0-0d36e140afab","Type":"ContainerStarted","Data":"4345c960f33e18d135406c1d76e43e8294cd6daf781703880d1501f8a347f624"} Mar 13 20:38:02 crc kubenswrapper[5029]: I0313 20:38:02.142201 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557238-f9gcs" event={"ID":"4bdfb127-5a55-4091-88f0-0d36e140afab","Type":"ContainerStarted","Data":"de5fc7d829ab5dc5803488a115f8948eaeab4f58a3f887ee9246b55998bc7687"} Mar 13 20:38:03 crc 
kubenswrapper[5029]: I0313 20:38:03.148607 5029 generic.go:334] "Generic (PLEG): container finished" podID="4bdfb127-5a55-4091-88f0-0d36e140afab" containerID="de5fc7d829ab5dc5803488a115f8948eaeab4f58a3f887ee9246b55998bc7687" exitCode=0 Mar 13 20:38:03 crc kubenswrapper[5029]: I0313 20:38:03.148686 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557238-f9gcs" event={"ID":"4bdfb127-5a55-4091-88f0-0d36e140afab","Type":"ContainerDied","Data":"de5fc7d829ab5dc5803488a115f8948eaeab4f58a3f887ee9246b55998bc7687"} Mar 13 20:38:04 crc kubenswrapper[5029]: I0313 20:38:04.368738 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557238-f9gcs" Mar 13 20:38:04 crc kubenswrapper[5029]: I0313 20:38:04.525985 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s642x\" (UniqueName: \"kubernetes.io/projected/4bdfb127-5a55-4091-88f0-0d36e140afab-kube-api-access-s642x\") pod \"4bdfb127-5a55-4091-88f0-0d36e140afab\" (UID: \"4bdfb127-5a55-4091-88f0-0d36e140afab\") " Mar 13 20:38:04 crc kubenswrapper[5029]: I0313 20:38:04.532242 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bdfb127-5a55-4091-88f0-0d36e140afab-kube-api-access-s642x" (OuterVolumeSpecName: "kube-api-access-s642x") pod "4bdfb127-5a55-4091-88f0-0d36e140afab" (UID: "4bdfb127-5a55-4091-88f0-0d36e140afab"). InnerVolumeSpecName "kube-api-access-s642x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:38:04 crc kubenswrapper[5029]: I0313 20:38:04.638098 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s642x\" (UniqueName: \"kubernetes.io/projected/4bdfb127-5a55-4091-88f0-0d36e140afab-kube-api-access-s642x\") on node \"crc\" DevicePath \"\"" Mar 13 20:38:05 crc kubenswrapper[5029]: I0313 20:38:05.160329 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557238-f9gcs" event={"ID":"4bdfb127-5a55-4091-88f0-0d36e140afab","Type":"ContainerDied","Data":"4345c960f33e18d135406c1d76e43e8294cd6daf781703880d1501f8a347f624"} Mar 13 20:38:05 crc kubenswrapper[5029]: I0313 20:38:05.160768 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4345c960f33e18d135406c1d76e43e8294cd6daf781703880d1501f8a347f624" Mar 13 20:38:05 crc kubenswrapper[5029]: I0313 20:38:05.160577 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557238-f9gcs" Mar 13 20:38:05 crc kubenswrapper[5029]: I0313 20:38:05.216283 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557232-9pmbr"] Mar 13 20:38:05 crc kubenswrapper[5029]: I0313 20:38:05.222799 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557232-9pmbr"] Mar 13 20:38:06 crc kubenswrapper[5029]: I0313 20:38:06.605937 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d" path="/var/lib/kubelet/pods/9bfdd95b-3452-4c9b-9df1-8a3ee4c43a3d/volumes" Mar 13 20:38:31 crc kubenswrapper[5029]: I0313 20:38:31.950297 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 13 20:38:31 crc kubenswrapper[5029]: I0313 20:38:31.950993 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:39:01 crc kubenswrapper[5029]: I0313 20:39:01.950544 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:39:01 crc kubenswrapper[5029]: I0313 20:39:01.951569 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.377789 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds"] Mar 13 20:39:20 crc kubenswrapper[5029]: E0313 20:39:20.378804 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bdfb127-5a55-4091-88f0-0d36e140afab" containerName="oc" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.378824 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bdfb127-5a55-4091-88f0-0d36e140afab" containerName="oc" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.378954 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bdfb127-5a55-4091-88f0-0d36e140afab" containerName="oc" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.379425 5029 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.381106 5029 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bpn8p" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.382240 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.382454 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.390036 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds"] Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.395008 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-xgksp"] Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.395810 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xgksp" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.401468 5029 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pvgpf" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.406677 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5snlv"] Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.407684 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.411335 5029 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-z28gs" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.417817 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5snlv"] Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.422220 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xgksp"] Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.519614 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm4tt\" (UniqueName: \"kubernetes.io/projected/3cf03391-9a73-41f5-96dd-4c3288ef36fc-kube-api-access-vm4tt\") pod \"cert-manager-858654f9db-xgksp\" (UID: \"3cf03391-9a73-41f5-96dd-4c3288ef36fc\") " pod="cert-manager/cert-manager-858654f9db-xgksp" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.519665 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn7ds\" (UniqueName: \"kubernetes.io/projected/e348abbe-f890-45ea-906e-28f15df7c05a-kube-api-access-zn7ds\") pod \"cert-manager-cainjector-cf98fcc89-wn4ds\" (UID: \"e348abbe-f890-45ea-906e-28f15df7c05a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.519769 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4czk\" (UniqueName: \"kubernetes.io/projected/916b635c-3f33-4546-80e7-33e61e2bd39c-kube-api-access-f4czk\") pod \"cert-manager-webhook-687f57d79b-5snlv\" (UID: \"916b635c-3f33-4546-80e7-33e61e2bd39c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.620599 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4czk\" (UniqueName: \"kubernetes.io/projected/916b635c-3f33-4546-80e7-33e61e2bd39c-kube-api-access-f4czk\") pod \"cert-manager-webhook-687f57d79b-5snlv\" (UID: \"916b635c-3f33-4546-80e7-33e61e2bd39c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.620689 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm4tt\" (UniqueName: \"kubernetes.io/projected/3cf03391-9a73-41f5-96dd-4c3288ef36fc-kube-api-access-vm4tt\") pod \"cert-manager-858654f9db-xgksp\" (UID: \"3cf03391-9a73-41f5-96dd-4c3288ef36fc\") " pod="cert-manager/cert-manager-858654f9db-xgksp" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.620728 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn7ds\" (UniqueName: \"kubernetes.io/projected/e348abbe-f890-45ea-906e-28f15df7c05a-kube-api-access-zn7ds\") pod \"cert-manager-cainjector-cf98fcc89-wn4ds\" (UID: \"e348abbe-f890-45ea-906e-28f15df7c05a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.636427 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.638337 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.654124 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn7ds\" (UniqueName: \"kubernetes.io/projected/e348abbe-f890-45ea-906e-28f15df7c05a-kube-api-access-zn7ds\") pod \"cert-manager-cainjector-cf98fcc89-wn4ds\" (UID: \"e348abbe-f890-45ea-906e-28f15df7c05a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds" Mar 13 20:39:20 crc kubenswrapper[5029]: 
I0313 20:39:20.669089 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm4tt\" (UniqueName: \"kubernetes.io/projected/3cf03391-9a73-41f5-96dd-4c3288ef36fc-kube-api-access-vm4tt\") pod \"cert-manager-858654f9db-xgksp\" (UID: \"3cf03391-9a73-41f5-96dd-4c3288ef36fc\") " pod="cert-manager/cert-manager-858654f9db-xgksp" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.669543 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4czk\" (UniqueName: \"kubernetes.io/projected/916b635c-3f33-4546-80e7-33e61e2bd39c-kube-api-access-f4czk\") pod \"cert-manager-webhook-687f57d79b-5snlv\" (UID: \"916b635c-3f33-4546-80e7-33e61e2bd39c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.698732 5029 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bpn8p" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.707342 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.710741 5029 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pvgpf" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.719260 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xgksp" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.732317 5029 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-z28gs" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.741192 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.910311 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xgksp"] Mar 13 20:39:20 crc kubenswrapper[5029]: I0313 20:39:20.938536 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds"] Mar 13 20:39:20 crc kubenswrapper[5029]: W0313 20:39:20.948395 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode348abbe_f890_45ea_906e_28f15df7c05a.slice/crio-ea0e9e147b2600c34f694e1eb487f7297ee9a9375f20e03c007ba7b7a00c374a WatchSource:0}: Error finding container ea0e9e147b2600c34f694e1eb487f7297ee9a9375f20e03c007ba7b7a00c374a: Status 404 returned error can't find the container with id ea0e9e147b2600c34f694e1eb487f7297ee9a9375f20e03c007ba7b7a00c374a Mar 13 20:39:21 crc kubenswrapper[5029]: I0313 20:39:21.075294 5029 scope.go:117] "RemoveContainer" containerID="df388d9e2c2ed3e1864e910668aa372eb61b5adaf0d6cbc0d5b4c63258cd8343" Mar 13 20:39:21 crc kubenswrapper[5029]: I0313 20:39:21.102335 5029 scope.go:117] "RemoveContainer" containerID="ca1ee83c839bcf07433b909552ab6c7228f0819db2440a6bb4b0c6211b2b405a" Mar 13 20:39:21 crc kubenswrapper[5029]: I0313 20:39:21.117450 5029 scope.go:117] "RemoveContainer" containerID="0db8189b37b301bd214a8dae0bd353f87272f3a26b057bb1280193100c850993" Mar 13 20:39:21 crc kubenswrapper[5029]: I0313 20:39:21.132292 5029 scope.go:117] "RemoveContainer" containerID="6f03dc2e7a7ff9634559dade79a1341b394c88eea7ed16a2dfdaf5f5785d5647" Mar 13 20:39:21 crc kubenswrapper[5029]: I0313 20:39:21.186613 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5snlv"] Mar 13 20:39:21 crc kubenswrapper[5029]: W0313 20:39:21.192128 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod916b635c_3f33_4546_80e7_33e61e2bd39c.slice/crio-1561eca13ed77b2d58782b498860018ebd50a51698fbb40ac677ec28f1519551 WatchSource:0}: Error finding container 1561eca13ed77b2d58782b498860018ebd50a51698fbb40ac677ec28f1519551: Status 404 returned error can't find the container with id 1561eca13ed77b2d58782b498860018ebd50a51698fbb40ac677ec28f1519551 Mar 13 20:39:21 crc kubenswrapper[5029]: I0313 20:39:21.633634 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" event={"ID":"916b635c-3f33-4546-80e7-33e61e2bd39c","Type":"ContainerStarted","Data":"1561eca13ed77b2d58782b498860018ebd50a51698fbb40ac677ec28f1519551"} Mar 13 20:39:21 crc kubenswrapper[5029]: I0313 20:39:21.635765 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds" event={"ID":"e348abbe-f890-45ea-906e-28f15df7c05a","Type":"ContainerStarted","Data":"ea0e9e147b2600c34f694e1eb487f7297ee9a9375f20e03c007ba7b7a00c374a"} Mar 13 20:39:21 crc kubenswrapper[5029]: I0313 20:39:21.636660 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xgksp" event={"ID":"3cf03391-9a73-41f5-96dd-4c3288ef36fc","Type":"ContainerStarted","Data":"985ef7492fd757ea018edcd8d8bb7ad8db184b0431d303aa6417a476b09c0639"} Mar 13 20:39:25 crc kubenswrapper[5029]: I0313 20:39:25.669746 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" event={"ID":"916b635c-3f33-4546-80e7-33e61e2bd39c","Type":"ContainerStarted","Data":"db11c575a8dded7e0804821f56a80c417d28f84011fb816ce07c080e2def9f9b"} Mar 13 20:39:25 crc kubenswrapper[5029]: I0313 20:39:25.671620 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" Mar 13 20:39:25 crc kubenswrapper[5029]: I0313 20:39:25.673077 5029 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds" event={"ID":"e348abbe-f890-45ea-906e-28f15df7c05a","Type":"ContainerStarted","Data":"a501816876648189d1ec4824479216e8d29ce099be652c2a0c28bae4bccebcf7"} Mar 13 20:39:25 crc kubenswrapper[5029]: I0313 20:39:25.674501 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xgksp" event={"ID":"3cf03391-9a73-41f5-96dd-4c3288ef36fc","Type":"ContainerStarted","Data":"3034236c74dbe6adb9748c7e623eb405ede7649266b449d0276519f9b63924b9"} Mar 13 20:39:25 crc kubenswrapper[5029]: I0313 20:39:25.689789 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" podStartSLOduration=2.356563543 podStartE2EDuration="5.689771968s" podCreationTimestamp="2026-03-13 20:39:20 +0000 UTC" firstStartedPulling="2026-03-13 20:39:21.194390759 +0000 UTC m=+721.210473162" lastFinishedPulling="2026-03-13 20:39:24.527599184 +0000 UTC m=+724.543681587" observedRunningTime="2026-03-13 20:39:25.688102693 +0000 UTC m=+725.704185106" watchObservedRunningTime="2026-03-13 20:39:25.689771968 +0000 UTC m=+725.705854391" Mar 13 20:39:25 crc kubenswrapper[5029]: I0313 20:39:25.708450 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-xgksp" podStartSLOduration=2.109923209 podStartE2EDuration="5.708431285s" podCreationTimestamp="2026-03-13 20:39:20 +0000 UTC" firstStartedPulling="2026-03-13 20:39:20.937135317 +0000 UTC m=+720.953217720" lastFinishedPulling="2026-03-13 20:39:24.535643393 +0000 UTC m=+724.551725796" observedRunningTime="2026-03-13 20:39:25.700431887 +0000 UTC m=+725.716514330" watchObservedRunningTime="2026-03-13 20:39:25.708431285 +0000 UTC m=+725.724513678" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.485996 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-wn4ds" podStartSLOduration=6.896993536 podStartE2EDuration="10.485978453s" podCreationTimestamp="2026-03-13 20:39:20 +0000 UTC" firstStartedPulling="2026-03-13 20:39:20.951417935 +0000 UTC m=+720.967500338" lastFinishedPulling="2026-03-13 20:39:24.540402852 +0000 UTC m=+724.556485255" observedRunningTime="2026-03-13 20:39:25.716682159 +0000 UTC m=+725.732764562" watchObservedRunningTime="2026-03-13 20:39:30.485978453 +0000 UTC m=+730.502060856" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.486495 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v2xrv"] Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.486901 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26" gracePeriod=30 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.486894 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="nbdb" containerID="cri-o://a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267" gracePeriod=30 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.486937 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="northd" containerID="cri-o://17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83" gracePeriod=30 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.486991 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" 
containerName="sbdb" containerID="cri-o://442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9" gracePeriod=30 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.487022 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kube-rbac-proxy-node" containerID="cri-o://96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f" gracePeriod=30 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.487052 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovn-acl-logging" containerID="cri-o://d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5" gracePeriod=30 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.486862 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovn-controller" containerID="cri-o://086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57" gracePeriod=30 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.513873 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" containerID="cri-o://f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4" gracePeriod=30 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.712113 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovnkube-controller/3.log" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.714174 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovn-acl-logging/0.log" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.714673 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovn-controller/0.log" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715064 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4" exitCode=0 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715089 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83" exitCode=0 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715097 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26" exitCode=0 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715105 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f" exitCode=0 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715113 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5" exitCode=143 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715120 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57" exitCode=143 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715173 5029 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4"} Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715243 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83"} Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715260 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26"} Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715272 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f"} Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715287 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5"} Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715300 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57"} Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.715326 5029 scope.go:117] "RemoveContainer" containerID="536159cac53ffe7b3ea9e7028fe899a2da8a567f204be26808a4a5fcde0b9364" Mar 13 20:39:30 crc 
kubenswrapper[5029]: I0313 20:39:30.719566 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/2.log" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.720088 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/1.log" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.720139 5029 generic.go:334] "Generic (PLEG): container finished" podID="08946f02-ffb6-404b-b25c-6c261e8c2633" containerID="a15b0ae3ffa521840adc6903e498024f19ac00b1f6f98a7564d70fbded2c3161" exitCode=2 Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.720166 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2thxr" event={"ID":"08946f02-ffb6-404b-b25c-6c261e8c2633","Type":"ContainerDied","Data":"a15b0ae3ffa521840adc6903e498024f19ac00b1f6f98a7564d70fbded2c3161"} Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.720642 5029 scope.go:117] "RemoveContainer" containerID="a15b0ae3ffa521840adc6903e498024f19ac00b1f6f98a7564d70fbded2c3161" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.720988 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2thxr_openshift-multus(08946f02-ffb6-404b-b25c-6c261e8c2633)\"" pod="openshift-multus/multus-2thxr" podUID="08946f02-ffb6-404b-b25c-6c261e8c2633" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.747161 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-5snlv" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.794461 5029 scope.go:117] "RemoveContainer" containerID="8c5d484f7b85bd270eb0c45d42d4c4dd414a582a585cb29d4e3fd36e4cd8560c" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.832741 5029 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovn-acl-logging/0.log" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.834151 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovn-controller/0.log" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.840675 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.890926 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nj8sw"] Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891196 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891230 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891246 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891253 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891264 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891275 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 
crc kubenswrapper[5029]: E0313 20:39:30.891462 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891469 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891483 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891490 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891503 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="nbdb" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891510 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="nbdb" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891522 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kubecfg-setup" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891529 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kubecfg-setup" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891537 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="northd" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891545 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="northd" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891560 
5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovn-acl-logging" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891566 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovn-acl-logging" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891577 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kube-rbac-proxy-node" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891587 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kube-rbac-proxy-node" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891598 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="sbdb" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891605 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="sbdb" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891614 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovn-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891621 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovn-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891742 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891755 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891769 5029 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="nbdb" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891782 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovn-acl-logging" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891793 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891801 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="northd" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891809 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891842 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovn-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891870 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="sbdb" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891881 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="kube-rbac-proxy-node" Mar 13 20:39:30 crc kubenswrapper[5029]: E0313 20:39:30.891983 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.891991 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 
20:39:30.892110 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.892121 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerName="ovnkube-controller" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.894506 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962538 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-netns\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962606 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5nhv\" (UniqueName: \"kubernetes.io/projected/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-kube-api-access-s5nhv\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962637 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-env-overrides\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962660 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-systemd-units\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc 
kubenswrapper[5029]: I0313 20:39:30.962669 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962688 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovn-node-metrics-cert\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962706 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-var-lib-openvswitch\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962731 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-ovn-kubernetes\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962749 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-ovn\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962764 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-systemd\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962789 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-config\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962814 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-bin\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962837 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-etc-openvswitch\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962875 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-node-log\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962899 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-script-lib\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc 
kubenswrapper[5029]: I0313 20:39:30.962925 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-slash\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962931 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962960 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962982 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.962984 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963001 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963019 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963021 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963094 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-node-log" (OuterVolumeSpecName: "node-log") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963168 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-slash" (OuterVolumeSpecName: "host-slash") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963297 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963344 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963401 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963382 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-openvswitch\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963641 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-log-socket\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963666 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-kubelet\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963683 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-netd\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963702 5029 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\" (UID: \"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6\") " Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963769 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963782 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-log-socket" (OuterVolumeSpecName: "log-socket") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963792 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.963844 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964084 5029 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964106 5029 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964118 5029 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964127 5029 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-node-log\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964135 5029 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964143 5029 
reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-slash\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964151 5029 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964161 5029 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-log-socket\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964169 5029 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964177 5029 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964188 5029 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964198 5029 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964207 5029 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964215 5029 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964224 5029 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964233 5029 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.964242 5029 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.968142 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.968427 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-kube-api-access-s5nhv" (OuterVolumeSpecName: "kube-api-access-s5nhv") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "kube-api-access-s5nhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:39:30 crc kubenswrapper[5029]: I0313 20:39:30.975649 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" (UID: "ed9df53f-1a1d-4cbc-997a-79dbe299d2b6"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.065811 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bfafd916-29a1-4620-89af-6c13f2b18ce3-ovnkube-script-lib\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.065890 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bfafd916-29a1-4620-89af-6c13f2b18ce3-env-overrides\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.065918 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-cni-bin\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.065939 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-slash\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.065959 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-cni-netd\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.065988 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-node-log\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066031 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-etc-openvswitch\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066052 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-kubelet\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066082 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bfafd916-29a1-4620-89af-6c13f2b18ce3-ovn-node-metrics-cert\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066116 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-run-netns\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066150 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-run-ovn-kubernetes\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066183 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-systemd-units\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066211 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-var-lib-openvswitch\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066241 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066287 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-run-ovn\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066365 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-run-systemd\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066402 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bfafd916-29a1-4620-89af-6c13f2b18ce3-ovnkube-config\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066422 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xv8d\" (UniqueName: \"kubernetes.io/projected/bfafd916-29a1-4620-89af-6c13f2b18ce3-kube-api-access-7xv8d\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066442 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-log-socket\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066459 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-run-openvswitch\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066524 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5nhv\" (UniqueName: \"kubernetes.io/projected/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-kube-api-access-s5nhv\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066537 5029 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.066563 5029 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 
20:39:31.168194 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-systemd-units\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168247 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-var-lib-openvswitch\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168279 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168303 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-run-ovn\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168323 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-run-systemd\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168321 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-systemd-units\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168341 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xv8d\" (UniqueName: \"kubernetes.io/projected/bfafd916-29a1-4620-89af-6c13f2b18ce3-kube-api-access-7xv8d\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168417 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-run-systemd\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168454 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-var-lib-openvswitch\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168450 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bfafd916-29a1-4620-89af-6c13f2b18ce3-ovnkube-config\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168462 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168502 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-log-socket\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168401 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-run-ovn\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168486 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-log-socket\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168572 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-run-openvswitch\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168631 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/bfafd916-29a1-4620-89af-6c13f2b18ce3-ovnkube-script-lib\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168668 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bfafd916-29a1-4620-89af-6c13f2b18ce3-env-overrides\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168681 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-run-openvswitch\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168701 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-cni-bin\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168723 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-slash\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168747 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-cni-netd\") pod 
\"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168770 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-node-log\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168793 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-etc-openvswitch\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168824 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-kubelet\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168920 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bfafd916-29a1-4620-89af-6c13f2b18ce3-ovn-node-metrics-cert\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.168959 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-run-netns\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169002 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-run-ovn-kubernetes\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169055 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-slash\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169088 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-run-ovn-kubernetes\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169116 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-cni-bin\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169128 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-cni-netd\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc 
kubenswrapper[5029]: I0313 20:39:31.169152 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-run-netns\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169165 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-etc-openvswitch\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169132 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-host-kubelet\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169190 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bfafd916-29a1-4620-89af-6c13f2b18ce3-env-overrides\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169304 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bfafd916-29a1-4620-89af-6c13f2b18ce3-ovnkube-config\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169392 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bfafd916-29a1-4620-89af-6c13f2b18ce3-ovnkube-script-lib\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.169431 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bfafd916-29a1-4620-89af-6c13f2b18ce3-node-log\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.173186 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bfafd916-29a1-4620-89af-6c13f2b18ce3-ovn-node-metrics-cert\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.184476 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xv8d\" (UniqueName: \"kubernetes.io/projected/bfafd916-29a1-4620-89af-6c13f2b18ce3-kube-api-access-7xv8d\") pod \"ovnkube-node-nj8sw\" (UID: \"bfafd916-29a1-4620-89af-6c13f2b18ce3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.207522 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.727293 5029 generic.go:334] "Generic (PLEG): container finished" podID="bfafd916-29a1-4620-89af-6c13f2b18ce3" containerID="4fe2818b0346ab5aae4f780ed6768f04b45a516878310a421bea9357afb58daf" exitCode=0 Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.727375 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerDied","Data":"4fe2818b0346ab5aae4f780ed6768f04b45a516878310a421bea9357afb58daf"} Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.727405 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerStarted","Data":"820d2f6f4d65ca7b70ca487125553d3a3d510ffedca73db1cd9f35684129fe44"} Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.735950 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovn-acl-logging/0.log" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.736594 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v2xrv_ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/ovn-controller/0.log" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.737116 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9" exitCode=0 Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.737175 5029 generic.go:334] "Generic (PLEG): container finished" podID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" containerID="a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267" exitCode=0 Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 
20:39:31.737189 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9"} Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.737235 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267"} Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.737248 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" event={"ID":"ed9df53f-1a1d-4cbc-997a-79dbe299d2b6","Type":"ContainerDied","Data":"6e515bbf26769a7aaf362f12e9fc8f01a21092122949f8788413ba54bd17ba2c"} Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.737267 5029 scope.go:117] "RemoveContainer" containerID="f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.737329 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v2xrv" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.741330 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/2.log" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.766403 5029 scope.go:117] "RemoveContainer" containerID="442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.782232 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v2xrv"] Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.786411 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v2xrv"] Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.814643 5029 scope.go:117] "RemoveContainer" containerID="a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.845763 5029 scope.go:117] "RemoveContainer" containerID="17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.860125 5029 scope.go:117] "RemoveContainer" containerID="f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.878785 5029 scope.go:117] "RemoveContainer" containerID="96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.897599 5029 scope.go:117] "RemoveContainer" containerID="d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.919136 5029 scope.go:117] "RemoveContainer" containerID="086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.940149 5029 scope.go:117] "RemoveContainer" 
containerID="bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.954239 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.954306 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.954356 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.955006 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8fcc9f784c6978226030105fcd2101ebdcc99b3d39948d8d2fe198f91727390"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.955063 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://f8fcc9f784c6978226030105fcd2101ebdcc99b3d39948d8d2fe198f91727390" gracePeriod=600 Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.978828 5029 scope.go:117] "RemoveContainer" 
containerID="f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4" Mar 13 20:39:31 crc kubenswrapper[5029]: E0313 20:39:31.979366 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4\": container with ID starting with f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4 not found: ID does not exist" containerID="f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.979401 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4"} err="failed to get container status \"f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4\": rpc error: code = NotFound desc = could not find container \"f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4\": container with ID starting with f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.979458 5029 scope.go:117] "RemoveContainer" containerID="442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9" Mar 13 20:39:31 crc kubenswrapper[5029]: E0313 20:39:31.980010 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\": container with ID starting with 442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9 not found: ID does not exist" containerID="442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.980034 5029 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9"} err="failed to get container status \"442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\": rpc error: code = NotFound desc = could not find container \"442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\": container with ID starting with 442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.980049 5029 scope.go:117] "RemoveContainer" containerID="a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267" Mar 13 20:39:31 crc kubenswrapper[5029]: E0313 20:39:31.980420 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\": container with ID starting with a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267 not found: ID does not exist" containerID="a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.980475 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267"} err="failed to get container status \"a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\": rpc error: code = NotFound desc = could not find container \"a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\": container with ID starting with a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.980509 5029 scope.go:117] "RemoveContainer" containerID="17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83" Mar 13 20:39:31 crc kubenswrapper[5029]: E0313 20:39:31.980895 5029 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\": container with ID starting with 17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83 not found: ID does not exist" containerID="17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.980919 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83"} err="failed to get container status \"17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\": rpc error: code = NotFound desc = could not find container \"17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\": container with ID starting with 17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.980950 5029 scope.go:117] "RemoveContainer" containerID="f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26" Mar 13 20:39:31 crc kubenswrapper[5029]: E0313 20:39:31.981172 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\": container with ID starting with f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26 not found: ID does not exist" containerID="f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.981204 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26"} err="failed to get container status \"f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\": rpc error: code = NotFound desc = could not find container 
\"f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\": container with ID starting with f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.981226 5029 scope.go:117] "RemoveContainer" containerID="96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f" Mar 13 20:39:31 crc kubenswrapper[5029]: E0313 20:39:31.982228 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\": container with ID starting with 96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f not found: ID does not exist" containerID="96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.982252 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f"} err="failed to get container status \"96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\": rpc error: code = NotFound desc = could not find container \"96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\": container with ID starting with 96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.982290 5029 scope.go:117] "RemoveContainer" containerID="d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5" Mar 13 20:39:31 crc kubenswrapper[5029]: E0313 20:39:31.982566 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\": container with ID starting with d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5 not found: ID does not exist" 
containerID="d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.982586 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5"} err="failed to get container status \"d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\": rpc error: code = NotFound desc = could not find container \"d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\": container with ID starting with d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.982600 5029 scope.go:117] "RemoveContainer" containerID="086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57" Mar 13 20:39:31 crc kubenswrapper[5029]: E0313 20:39:31.983139 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\": container with ID starting with 086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57 not found: ID does not exist" containerID="086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.983175 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57"} err="failed to get container status \"086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\": rpc error: code = NotFound desc = could not find container \"086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\": container with ID starting with 086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.983187 5029 scope.go:117] 
"RemoveContainer" containerID="bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0" Mar 13 20:39:31 crc kubenswrapper[5029]: E0313 20:39:31.983702 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\": container with ID starting with bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0 not found: ID does not exist" containerID="bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.983721 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0"} err="failed to get container status \"bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\": rpc error: code = NotFound desc = could not find container \"bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\": container with ID starting with bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.983739 5029 scope.go:117] "RemoveContainer" containerID="f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.984421 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4"} err="failed to get container status \"f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4\": rpc error: code = NotFound desc = could not find container \"f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4\": container with ID starting with f436e4725b08a4d4c4227b758e7736a5bc69488a401b0acac613039ddc0644e4 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.984456 5029 
scope.go:117] "RemoveContainer" containerID="442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.985622 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9"} err="failed to get container status \"442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\": rpc error: code = NotFound desc = could not find container \"442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9\": container with ID starting with 442c6e68139e8d50be4de0e19b49ddace8128ec8e85aa12ecd2992de6b59afa9 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.985650 5029 scope.go:117] "RemoveContainer" containerID="a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.986146 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267"} err="failed to get container status \"a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\": rpc error: code = NotFound desc = could not find container \"a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267\": container with ID starting with a15f60cc89673f8d1f4af72c1143385d5f9dfbbd032e517bcc54965329b08267 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.986191 5029 scope.go:117] "RemoveContainer" containerID="17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.986534 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83"} err="failed to get container status \"17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\": rpc 
error: code = NotFound desc = could not find container \"17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83\": container with ID starting with 17bcc1e37bbfb07b2daf78aefd021a425556f8592ebb7855c200b0cfd8f66a83 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.986574 5029 scope.go:117] "RemoveContainer" containerID="f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.986813 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26"} err="failed to get container status \"f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\": rpc error: code = NotFound desc = could not find container \"f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26\": container with ID starting with f835c657bc9c55d1db0e6eebb50d17a7f4dc0139708865f939ca31f100958f26 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.986831 5029 scope.go:117] "RemoveContainer" containerID="96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.987100 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f"} err="failed to get container status \"96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\": rpc error: code = NotFound desc = could not find container \"96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f\": container with ID starting with 96285a9d17cc06db806b33b01c997650eafa365590d763045d634b305e06be7f not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.987147 5029 scope.go:117] "RemoveContainer" containerID="d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5" Mar 13 20:39:31 crc 
kubenswrapper[5029]: I0313 20:39:31.987501 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5"} err="failed to get container status \"d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\": rpc error: code = NotFound desc = could not find container \"d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5\": container with ID starting with d811c8ae1927b26292577bbd56a6b306cb3d0fa0cdcac6e7ce5ae6cdd9292ea5 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.987521 5029 scope.go:117] "RemoveContainer" containerID="086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.987745 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57"} err="failed to get container status \"086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\": rpc error: code = NotFound desc = could not find container \"086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57\": container with ID starting with 086f970be2c2245c9686685c2cebe1f4547fd038bcea7ec14751b81ed5c2be57 not found: ID does not exist" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.987764 5029 scope.go:117] "RemoveContainer" containerID="bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0" Mar 13 20:39:31 crc kubenswrapper[5029]: I0313 20:39:31.988001 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0"} err="failed to get container status \"bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\": rpc error: code = NotFound desc = could not find container \"bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0\": container 
with ID starting with bba3a9055212a96d6e465f5be88972fb130b12dc06005e0bad69447582b91cc0 not found: ID does not exist" Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.611812 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed9df53f-1a1d-4cbc-997a-79dbe299d2b6" path="/var/lib/kubelet/pods/ed9df53f-1a1d-4cbc-997a-79dbe299d2b6/volumes" Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.749039 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="f8fcc9f784c6978226030105fcd2101ebdcc99b3d39948d8d2fe198f91727390" exitCode=0 Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.749130 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"f8fcc9f784c6978226030105fcd2101ebdcc99b3d39948d8d2fe198f91727390"} Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.749209 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"4bbea3ecaf26f1609521229697004331cac38ad489818c6871ecf93d481648d2"} Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.749237 5029 scope.go:117] "RemoveContainer" containerID="120ec79d685d8e39b184565b1c63047076832380141fec1a83b868fe6ea8eef7" Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.755519 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerStarted","Data":"cb23b4223113a5f4da0a5ad4ddfd11083eb27fa988e0678a51638fd0be143c54"} Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.755567 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" 
event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerStarted","Data":"bd724cf0999a81748884f918b616078f17a043586d1560466d9cb60f9c888be4"} Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.755582 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerStarted","Data":"b8286bd933e0f4943d9092bc86d792ad24ee3c6eb8a967eb7151bfcdf19d425b"} Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.755595 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerStarted","Data":"52a4b3638daf6ed93c9a7e6348ef9067bb70d7e834e52bbce2023067f289c59f"} Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.755608 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerStarted","Data":"c0e14d971e73ee6b0e4e3ead9af31556401072982aca37a9134aa8a3aeb79b6f"} Mar 13 20:39:32 crc kubenswrapper[5029]: I0313 20:39:32.755619 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerStarted","Data":"ef245d33eeadf1149956c1cee28f9e000be6d96e1216324c685efadfbe4e9845"} Mar 13 20:39:34 crc kubenswrapper[5029]: I0313 20:39:34.777169 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerStarted","Data":"3831e0f6922530b9cdbc5d1d18a10a7462497af31a038a41bfc44935c542ce25"} Mar 13 20:39:37 crc kubenswrapper[5029]: I0313 20:39:37.800997 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" 
event={"ID":"bfafd916-29a1-4620-89af-6c13f2b18ce3","Type":"ContainerStarted","Data":"5a7c1a16a1298113ab9ff7ffafc9fd68ed60ee13cbbb0a15aa2cbed6c2641326"} Mar 13 20:39:37 crc kubenswrapper[5029]: I0313 20:39:37.801974 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:37 crc kubenswrapper[5029]: I0313 20:39:37.801994 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:37 crc kubenswrapper[5029]: I0313 20:39:37.839389 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:37 crc kubenswrapper[5029]: I0313 20:39:37.876940 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" podStartSLOduration=7.876914093 podStartE2EDuration="7.876914093s" podCreationTimestamp="2026-03-13 20:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:39:37.839033233 +0000 UTC m=+737.855115646" watchObservedRunningTime="2026-03-13 20:39:37.876914093 +0000 UTC m=+737.892996496" Mar 13 20:39:38 crc kubenswrapper[5029]: I0313 20:39:38.806943 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:38 crc kubenswrapper[5029]: I0313 20:39:38.843050 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:39:42 crc kubenswrapper[5029]: I0313 20:39:42.599657 5029 scope.go:117] "RemoveContainer" containerID="a15b0ae3ffa521840adc6903e498024f19ac00b1f6f98a7564d70fbded2c3161" Mar 13 20:39:42 crc kubenswrapper[5029]: E0313 20:39:42.600651 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2thxr_openshift-multus(08946f02-ffb6-404b-b25c-6c261e8c2633)\"" pod="openshift-multus/multus-2thxr" podUID="08946f02-ffb6-404b-b25c-6c261e8c2633" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.503006 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph"] Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.504457 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.507168 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xl4b7" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.507803 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.508030 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.590516 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/224a4b52-5147-4f0a-bda1-25eb237c0512-log\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.590584 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/224a4b52-5147-4f0a-bda1-25eb237c0512-data\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.590613 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/224a4b52-5147-4f0a-bda1-25eb237c0512-run\") pod \"ceph\" (UID: 
\"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.590738 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqhl\" (UniqueName: \"kubernetes.io/projected/224a4b52-5147-4f0a-bda1-25eb237c0512-kube-api-access-zhqhl\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.691429 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhqhl\" (UniqueName: \"kubernetes.io/projected/224a4b52-5147-4f0a-bda1-25eb237c0512-kube-api-access-zhqhl\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.692019 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/224a4b52-5147-4f0a-bda1-25eb237c0512-log\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.692071 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/224a4b52-5147-4f0a-bda1-25eb237c0512-data\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.692223 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/224a4b52-5147-4f0a-bda1-25eb237c0512-run\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.692530 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/224a4b52-5147-4f0a-bda1-25eb237c0512-log\") pod \"ceph\" (UID: 
\"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.692817 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/224a4b52-5147-4f0a-bda1-25eb237c0512-data\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.692950 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/224a4b52-5147-4f0a-bda1-25eb237c0512-run\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.710522 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhqhl\" (UniqueName: \"kubernetes.io/projected/224a4b52-5147-4f0a-bda1-25eb237c0512-kube-api-access-zhqhl\") pod \"ceph\" (UID: \"224a4b52-5147-4f0a-bda1-25eb237c0512\") " pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: I0313 20:39:53.824335 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph" Mar 13 20:39:53 crc kubenswrapper[5029]: E0313 20:39:53.853459 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:53 crc kubenswrapper[5029]: E0313 20:39:53.870624 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:54 crc kubenswrapper[5029]: I0313 20:39:54.594778 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"224a4b52-5147-4f0a-bda1-25eb237c0512","Type":"ContainerStarted","Data":"a14799e0a402187316e0e796c6590a2a7d2b94be93973a987218f8042375a608"} Mar 13 20:39:54 crc kubenswrapper[5029]: I0313 20:39:54.599779 5029 scope.go:117] "RemoveContainer" containerID="a15b0ae3ffa521840adc6903e498024f19ac00b1f6f98a7564d70fbded2c3161" Mar 13 20:39:55 crc kubenswrapper[5029]: E0313 20:39:55.035914 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:55 crc kubenswrapper[5029]: E0313 20:39:55.056943 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:55 crc kubenswrapper[5029]: I0313 20:39:55.602214 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2thxr_08946f02-ffb6-404b-b25c-6c261e8c2633/kube-multus/2.log" Mar 13 
20:39:55 crc kubenswrapper[5029]: I0313 20:39:55.602288 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2thxr" event={"ID":"08946f02-ffb6-404b-b25c-6c261e8c2633","Type":"ContainerStarted","Data":"53d0eb0a3f8031ac3844cde9707407b70ec0a7f6b05e0a9c8f6ff690c7199583"} Mar 13 20:39:56 crc kubenswrapper[5029]: E0313 20:39:56.227341 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:56 crc kubenswrapper[5029]: E0313 20:39:56.241437 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:57 crc kubenswrapper[5029]: E0313 20:39:57.464637 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:57 crc kubenswrapper[5029]: E0313 20:39:57.479418 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:58 crc kubenswrapper[5029]: E0313 20:39:58.614891 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:58 crc kubenswrapper[5029]: E0313 20:39:58.627258 5029 server.go:309] "Unable to authenticate the request due to an 
error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:59 crc kubenswrapper[5029]: E0313 20:39:59.764456 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:39:59 crc kubenswrapper[5029]: E0313 20:39:59.779558 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.188193 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557240-x4n6h"] Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.190360 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557240-x4n6h" Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.194476 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.194769 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.195059 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.207644 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557240-x4n6h"] Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.324435 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v79mh\" (UniqueName: \"kubernetes.io/projected/aac67ec2-0066-4674-9b71-5e10b6385b42-kube-api-access-v79mh\") pod \"auto-csr-approver-29557240-x4n6h\" (UID: \"aac67ec2-0066-4674-9b71-5e10b6385b42\") " pod="openshift-infra/auto-csr-approver-29557240-x4n6h" Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.425358 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v79mh\" (UniqueName: \"kubernetes.io/projected/aac67ec2-0066-4674-9b71-5e10b6385b42-kube-api-access-v79mh\") pod \"auto-csr-approver-29557240-x4n6h\" (UID: \"aac67ec2-0066-4674-9b71-5e10b6385b42\") " pod="openshift-infra/auto-csr-approver-29557240-x4n6h" Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.448186 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v79mh\" (UniqueName: \"kubernetes.io/projected/aac67ec2-0066-4674-9b71-5e10b6385b42-kube-api-access-v79mh\") pod \"auto-csr-approver-29557240-x4n6h\" (UID: \"aac67ec2-0066-4674-9b71-5e10b6385b42\") " 
pod="openshift-infra/auto-csr-approver-29557240-x4n6h" Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.527916 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557240-x4n6h" Mar 13 20:40:00 crc kubenswrapper[5029]: I0313 20:40:00.958262 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557240-x4n6h"] Mar 13 20:40:00 crc kubenswrapper[5029]: E0313 20:40:00.964949 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:00 crc kubenswrapper[5029]: E0313 20:40:00.978451 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:01 crc kubenswrapper[5029]: I0313 20:40:01.226967 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nj8sw" Mar 13 20:40:02 crc kubenswrapper[5029]: E0313 20:40:02.105325 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:02 crc kubenswrapper[5029]: E0313 20:40:02.119035 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:03 crc kubenswrapper[5029]: E0313 20:40:03.281835 5029 server.go:309] "Unable to authenticate the request due to an 
error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:03 crc kubenswrapper[5029]: E0313 20:40:03.296686 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:04 crc kubenswrapper[5029]: E0313 20:40:04.452286 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:04 crc kubenswrapper[5029]: E0313 20:40:04.464907 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:05 crc kubenswrapper[5029]: E0313 20:40:05.623566 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:05 crc kubenswrapper[5029]: E0313 20:40:05.635764 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:06 crc kubenswrapper[5029]: E0313 20:40:06.813243 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, 
AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:06 crc kubenswrapper[5029]: E0313 20:40:06.826338 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:07 crc kubenswrapper[5029]: E0313 20:40:07.969691 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:07 crc kubenswrapper[5029]: E0313 20:40:07.984058 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:09 crc kubenswrapper[5029]: E0313 20:40:09.116016 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:09 crc kubenswrapper[5029]: E0313 20:40:09.131512 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:10 crc kubenswrapper[5029]: E0313 20:40:10.301751 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown 
authority" Mar 13 20:40:10 crc kubenswrapper[5029]: E0313 20:40:10.315224 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:11 crc kubenswrapper[5029]: E0313 20:40:11.462681 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:11 crc kubenswrapper[5029]: E0313 20:40:11.478676 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:12 crc kubenswrapper[5029]: E0313 20:40:12.661733 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:12 crc kubenswrapper[5029]: E0313 20:40:12.678431 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:13 crc kubenswrapper[5029]: E0313 20:40:13.885194 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:13 crc kubenswrapper[5029]: E0313 20:40:13.904466 5029 server.go:309] "Unable to 
authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:14 crc kubenswrapper[5029]: I0313 20:40:14.707362 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557240-x4n6h" event={"ID":"aac67ec2-0066-4674-9b71-5e10b6385b42","Type":"ContainerStarted","Data":"088e86588aee1c1a0a45f7f22a06551838d2ad7bc12a0758bbb5c02f4ced9ab0"} Mar 13 20:40:14 crc kubenswrapper[5029]: E0313 20:40:14.907257 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/ceph/demo:latest-squid" Mar 13 20:40:14 crc kubenswrapper[5029]: E0313 20:40:14.907480 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceph,Image:quay.io/ceph/demo:latest-squid,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MON_IP,Value:192.168.126.11,ValueFrom:nil,},EnvVar{Name:CEPH_DAEMON,Value:demo,ValueFrom:nil,},EnvVar{Name:CEPH_PUBLIC_NETWORK,Value:0.0.0.0/0,ValueFrom:nil,},EnvVar{Name:DEMO_DAEMONS,Value:osd,mds,rgw,ValueFrom:nil,},EnvVar{Name:CEPH_DEMO_UID,Value:0,ValueFrom:nil,},EnvVar{Name:RGW_NAME,Value:ceph,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/var/lib/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log,ReadOnly:false,MountPath:/var/log/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhqhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagat
ion:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceph_openstack(224a4b52-5147-4f0a-bda1-25eb237c0512): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:40:14 crc kubenswrapper[5029]: E0313 20:40:14.909503 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceph" podUID="224a4b52-5147-4f0a-bda1-25eb237c0512" Mar 13 20:40:15 crc kubenswrapper[5029]: E0313 20:40:15.105996 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:15 crc kubenswrapper[5029]: E0313 20:40:15.120764 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:15 crc kubenswrapper[5029]: E0313 20:40:15.714193 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/ceph/demo:latest-squid\\\"\"" pod="openstack/ceph" podUID="224a4b52-5147-4f0a-bda1-25eb237c0512" Mar 13 20:40:16 crc kubenswrapper[5029]: E0313 20:40:16.278299 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying 
certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:16 crc kubenswrapper[5029]: E0313 20:40:16.295748 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:16 crc kubenswrapper[5029]: I0313 20:40:16.720554 5029 generic.go:334] "Generic (PLEG): container finished" podID="aac67ec2-0066-4674-9b71-5e10b6385b42" containerID="8a8f29775510291207dc2c8e4ec5c2682a4a9da2712a86033ad7d78fdc67714c" exitCode=0 Mar 13 20:40:16 crc kubenswrapper[5029]: I0313 20:40:16.720621 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557240-x4n6h" event={"ID":"aac67ec2-0066-4674-9b71-5e10b6385b42","Type":"ContainerDied","Data":"8a8f29775510291207dc2c8e4ec5c2682a4a9da2712a86033ad7d78fdc67714c"} Mar 13 20:40:17 crc kubenswrapper[5029]: E0313 20:40:17.522982 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:17 crc kubenswrapper[5029]: E0313 20:40:17.539446 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:17 crc kubenswrapper[5029]: I0313 20:40:17.958503 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557240-x4n6h" Mar 13 20:40:18 crc kubenswrapper[5029]: I0313 20:40:18.083813 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v79mh\" (UniqueName: \"kubernetes.io/projected/aac67ec2-0066-4674-9b71-5e10b6385b42-kube-api-access-v79mh\") pod \"aac67ec2-0066-4674-9b71-5e10b6385b42\" (UID: \"aac67ec2-0066-4674-9b71-5e10b6385b42\") " Mar 13 20:40:18 crc kubenswrapper[5029]: I0313 20:40:18.088512 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac67ec2-0066-4674-9b71-5e10b6385b42-kube-api-access-v79mh" (OuterVolumeSpecName: "kube-api-access-v79mh") pod "aac67ec2-0066-4674-9b71-5e10b6385b42" (UID: "aac67ec2-0066-4674-9b71-5e10b6385b42"). InnerVolumeSpecName "kube-api-access-v79mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:40:18 crc kubenswrapper[5029]: I0313 20:40:18.185913 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v79mh\" (UniqueName: \"kubernetes.io/projected/aac67ec2-0066-4674-9b71-5e10b6385b42-kube-api-access-v79mh\") on node \"crc\" DevicePath \"\"" Mar 13 20:40:18 crc kubenswrapper[5029]: E0313 20:40:18.727835 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:18 crc kubenswrapper[5029]: I0313 20:40:18.733724 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557240-x4n6h" event={"ID":"aac67ec2-0066-4674-9b71-5e10b6385b42","Type":"ContainerDied","Data":"088e86588aee1c1a0a45f7f22a06551838d2ad7bc12a0758bbb5c02f4ced9ab0"} Mar 13 20:40:18 crc kubenswrapper[5029]: I0313 20:40:18.733767 5029 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="088e86588aee1c1a0a45f7f22a06551838d2ad7bc12a0758bbb5c02f4ced9ab0" Mar 13 20:40:18 crc kubenswrapper[5029]: I0313 20:40:18.733902 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557240-x4n6h" Mar 13 20:40:18 crc kubenswrapper[5029]: E0313 20:40:18.741893 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:19 crc kubenswrapper[5029]: I0313 20:40:19.018438 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557234-j9kj4"] Mar 13 20:40:19 crc kubenswrapper[5029]: I0313 20:40:19.021488 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557234-j9kj4"] Mar 13 20:40:19 crc kubenswrapper[5029]: E0313 20:40:19.876173 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:19 crc kubenswrapper[5029]: E0313 20:40:19.893895 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:20 crc kubenswrapper[5029]: I0313 20:40:20.605305 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe71349-ba8d-4f87-9e80-2d6a5417b5be" path="/var/lib/kubelet/pods/cbe71349-ba8d-4f87-9e80-2d6a5417b5be/volumes" Mar 13 20:40:21 crc kubenswrapper[5029]: E0313 20:40:21.057787 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate 
SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:21 crc kubenswrapper[5029]: E0313 20:40:21.073103 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:21 crc kubenswrapper[5029]: I0313 20:40:21.195740 5029 scope.go:117] "RemoveContainer" containerID="ac984f041fa5cdbd8477204cc564aaa4562e343cc3e0da05d15ac924439238b8" Mar 13 20:40:22 crc kubenswrapper[5029]: E0313 20:40:22.234425 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:22 crc kubenswrapper[5029]: E0313 20:40:22.249393 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:23 crc kubenswrapper[5029]: E0313 20:40:23.410843 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:23 crc kubenswrapper[5029]: E0313 20:40:23.426598 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:24 crc kubenswrapper[5029]: E0313 20:40:24.615072 5029 server.go:309] "Unable to 
authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:24 crc kubenswrapper[5029]: E0313 20:40:24.630266 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:25 crc kubenswrapper[5029]: E0313 20:40:25.838612 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:25 crc kubenswrapper[5029]: E0313 20:40:25.862605 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:27 crc kubenswrapper[5029]: E0313 20:40:27.073712 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:27 crc kubenswrapper[5029]: E0313 20:40:27.090898 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:27 crc kubenswrapper[5029]: I0313 20:40:27.797728 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" 
event={"ID":"224a4b52-5147-4f0a-bda1-25eb237c0512","Type":"ContainerStarted","Data":"46b899e581328af69318a3c7b383fbbedd01fd119f9517adec0dd4ddedfd9b84"} Mar 13 20:40:27 crc kubenswrapper[5029]: I0313 20:40:27.814265 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph" podStartSLOduration=1.58740863 podStartE2EDuration="34.814246672s" podCreationTimestamp="2026-03-13 20:39:53 +0000 UTC" firstStartedPulling="2026-03-13 20:39:53.850744098 +0000 UTC m=+753.866826511" lastFinishedPulling="2026-03-13 20:40:27.07758211 +0000 UTC m=+787.093664553" observedRunningTime="2026-03-13 20:40:27.810655505 +0000 UTC m=+787.826737918" watchObservedRunningTime="2026-03-13 20:40:27.814246672 +0000 UTC m=+787.830329075" Mar 13 20:40:28 crc kubenswrapper[5029]: E0313 20:40:28.249887 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:28 crc kubenswrapper[5029]: E0313 20:40:28.267485 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:29 crc kubenswrapper[5029]: E0313 20:40:29.441192 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:29 crc kubenswrapper[5029]: E0313 20:40:29.455030 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown 
authority" Mar 13 20:40:30 crc kubenswrapper[5029]: E0313 20:40:30.660046 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:30 crc kubenswrapper[5029]: E0313 20:40:30.680055 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:31 crc kubenswrapper[5029]: E0313 20:40:31.864722 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:31 crc kubenswrapper[5029]: E0313 20:40:31.882989 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:33 crc kubenswrapper[5029]: E0313 20:40:33.061658 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:33 crc kubenswrapper[5029]: E0313 20:40:33.075535 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:34 crc kubenswrapper[5029]: E0313 20:40:34.222211 5029 server.go:309] "Unable to 
authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:34 crc kubenswrapper[5029]: E0313 20:40:34.236872 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:35 crc kubenswrapper[5029]: E0313 20:40:35.423878 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:35 crc kubenswrapper[5029]: E0313 20:40:35.440356 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:36 crc kubenswrapper[5029]: E0313 20:40:36.618840 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:36 crc kubenswrapper[5029]: E0313 20:40:36.635450 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:37 crc kubenswrapper[5029]: E0313 20:40:37.780896 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, 
AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:37 crc kubenswrapper[5029]: E0313 20:40:37.793569 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:38 crc kubenswrapper[5029]: E0313 20:40:38.978650 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:38 crc kubenswrapper[5029]: E0313 20:40:38.994433 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:40 crc kubenswrapper[5029]: E0313 20:40:40.181783 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:40 crc kubenswrapper[5029]: E0313 20:40:40.195744 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:41 crc kubenswrapper[5029]: E0313 20:40:41.382286 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown 
authority" Mar 13 20:40:41 crc kubenswrapper[5029]: E0313 20:40:41.404055 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:42 crc kubenswrapper[5029]: E0313 20:40:42.607926 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:42 crc kubenswrapper[5029]: E0313 20:40:42.623338 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:43 crc kubenswrapper[5029]: E0313 20:40:43.825176 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:43 crc kubenswrapper[5029]: E0313 20:40:43.841162 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:45 crc kubenswrapper[5029]: E0313 20:40:45.025185 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:45 crc kubenswrapper[5029]: E0313 20:40:45.044307 5029 server.go:309] "Unable to 
authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:46 crc kubenswrapper[5029]: E0313 20:40:46.301052 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:46 crc kubenswrapper[5029]: E0313 20:40:46.320735 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:47 crc kubenswrapper[5029]: E0313 20:40:47.494646 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:47 crc kubenswrapper[5029]: E0313 20:40:47.512325 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:48 crc kubenswrapper[5029]: E0313 20:40:48.668217 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:48 crc kubenswrapper[5029]: E0313 20:40:48.680244 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, 
AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:49 crc kubenswrapper[5029]: E0313 20:40:49.833131 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:49 crc kubenswrapper[5029]: E0313 20:40:49.847580 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:51 crc kubenswrapper[5029]: E0313 20:40:51.014465 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:51 crc kubenswrapper[5029]: E0313 20:40:51.029588 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:52 crc kubenswrapper[5029]: E0313 20:40:52.202598 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:52 crc kubenswrapper[5029]: E0313 20:40:52.218496 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown 
authority" Mar 13 20:40:53 crc kubenswrapper[5029]: E0313 20:40:53.382229 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:53 crc kubenswrapper[5029]: E0313 20:40:53.398842 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:54 crc kubenswrapper[5029]: E0313 20:40:54.568042 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:54 crc kubenswrapper[5029]: E0313 20:40:54.581362 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:55 crc kubenswrapper[5029]: E0313 20:40:55.753484 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:55 crc kubenswrapper[5029]: E0313 20:40:55.766050 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:56 crc kubenswrapper[5029]: E0313 20:40:56.981403 5029 server.go:309] "Unable to 
authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:56 crc kubenswrapper[5029]: E0313 20:40:56.995283 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:58 crc kubenswrapper[5029]: E0313 20:40:58.177024 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:58 crc kubenswrapper[5029]: E0313 20:40:58.190674 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:59 crc kubenswrapper[5029]: E0313 20:40:59.398802 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:40:59 crc kubenswrapper[5029]: E0313 20:40:59.411750 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:00 crc kubenswrapper[5029]: E0313 20:41:00.593214 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, 
AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:00 crc kubenswrapper[5029]: E0313 20:41:00.607952 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:01 crc kubenswrapper[5029]: E0313 20:41:01.821355 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:01 crc kubenswrapper[5029]: E0313 20:41:01.846294 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:03 crc kubenswrapper[5029]: E0313 20:41:03.026193 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:03 crc kubenswrapper[5029]: E0313 20:41:03.041284 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:04 crc kubenswrapper[5029]: E0313 20:41:04.187238 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown 
authority" Mar 13 20:41:04 crc kubenswrapper[5029]: E0313 20:41:04.200689 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:05 crc kubenswrapper[5029]: E0313 20:41:05.336472 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:05 crc kubenswrapper[5029]: E0313 20:41:05.350041 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:06 crc kubenswrapper[5029]: E0313 20:41:06.562669 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:06 crc kubenswrapper[5029]: E0313 20:41:06.577086 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:07 crc kubenswrapper[5029]: E0313 20:41:07.790668 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:07 crc kubenswrapper[5029]: E0313 20:41:07.806751 5029 server.go:309] "Unable to 
authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:08 crc kubenswrapper[5029]: E0313 20:41:08.941373 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:08 crc kubenswrapper[5029]: E0313 20:41:08.960915 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:10 crc kubenswrapper[5029]: E0313 20:41:10.142445 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:10 crc kubenswrapper[5029]: E0313 20:41:10.158426 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:11 crc kubenswrapper[5029]: E0313 20:41:11.321396 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:11 crc kubenswrapper[5029]: E0313 20:41:11.335298 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, 
AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:12 crc kubenswrapper[5029]: E0313 20:41:12.528216 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:12 crc kubenswrapper[5029]: E0313 20:41:12.541891 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:13 crc kubenswrapper[5029]: E0313 20:41:13.740763 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:13 crc kubenswrapper[5029]: E0313 20:41:13.754446 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:14 crc kubenswrapper[5029]: E0313 20:41:14.897328 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:14 crc kubenswrapper[5029]: E0313 20:41:14.911842 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown 
authority" Mar 13 20:41:16 crc kubenswrapper[5029]: E0313 20:41:16.109924 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:16 crc kubenswrapper[5029]: E0313 20:41:16.125106 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:17 crc kubenswrapper[5029]: E0313 20:41:17.371809 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:17 crc kubenswrapper[5029]: E0313 20:41:17.389633 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:18 crc kubenswrapper[5029]: E0313 20:41:18.554521 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:18 crc kubenswrapper[5029]: E0313 20:41:18.572266 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:19 crc kubenswrapper[5029]: E0313 20:41:19.733870 5029 server.go:309] "Unable to 
authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:19 crc kubenswrapper[5029]: E0313 20:41:19.746972 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:20 crc kubenswrapper[5029]: E0313 20:41:20.917049 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:20 crc kubenswrapper[5029]: E0313 20:41:20.931485 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:22 crc kubenswrapper[5029]: E0313 20:41:22.094761 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:22 crc kubenswrapper[5029]: E0313 20:41:22.109240 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:23 crc kubenswrapper[5029]: E0313 20:41:23.271818 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, 
AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:23 crc kubenswrapper[5029]: E0313 20:41:23.286378 5029 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8280480487919544777, SKID=, AKID=28:7E:37:A3:4E:60:79:1E:16:E8:6F:D7:A4:F6:C1:83:A1:41:D6:0E failed: x509: certificate signed by unknown authority" Mar 13 20:41:23 crc kubenswrapper[5029]: I0313 20:41:23.339132 5029 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 20:41:49 crc kubenswrapper[5029]: E0313 20:41:49.332766 5029 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.181:57018->38.102.83.181:36147: read tcp 38.102.83.181:57018->38.102.83.181:36147: read: connection reset by peer Mar 13 20:41:50 crc kubenswrapper[5029]: E0313 20:41:50.535044 5029 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.181:57094->38.102.83.181:36147: write tcp 38.102.83.181:57094->38.102.83.181:36147: write: broken pipe Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.138047 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557242-hl5b9"] Mar 13 20:42:00 crc kubenswrapper[5029]: E0313 20:42:00.138852 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac67ec2-0066-4674-9b71-5e10b6385b42" containerName="oc" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.138881 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac67ec2-0066-4674-9b71-5e10b6385b42" containerName="oc" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.138999 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac67ec2-0066-4674-9b71-5e10b6385b42" containerName="oc" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.139372 5029 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557242-hl5b9" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.141893 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.142542 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.142543 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.146731 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557242-hl5b9"] Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.317223 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t26bv\" (UniqueName: \"kubernetes.io/projected/231cc164-04f6-42e0-ad5e-6b30fb9dbba3-kube-api-access-t26bv\") pod \"auto-csr-approver-29557242-hl5b9\" (UID: \"231cc164-04f6-42e0-ad5e-6b30fb9dbba3\") " pod="openshift-infra/auto-csr-approver-29557242-hl5b9" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.418808 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t26bv\" (UniqueName: \"kubernetes.io/projected/231cc164-04f6-42e0-ad5e-6b30fb9dbba3-kube-api-access-t26bv\") pod \"auto-csr-approver-29557242-hl5b9\" (UID: \"231cc164-04f6-42e0-ad5e-6b30fb9dbba3\") " pod="openshift-infra/auto-csr-approver-29557242-hl5b9" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.435673 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t26bv\" (UniqueName: \"kubernetes.io/projected/231cc164-04f6-42e0-ad5e-6b30fb9dbba3-kube-api-access-t26bv\") pod \"auto-csr-approver-29557242-hl5b9\" (UID: \"231cc164-04f6-42e0-ad5e-6b30fb9dbba3\") " 
pod="openshift-infra/auto-csr-approver-29557242-hl5b9" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.498677 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557242-hl5b9" Mar 13 20:42:00 crc kubenswrapper[5029]: I0313 20:42:00.688495 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557242-hl5b9"] Mar 13 20:42:01 crc kubenswrapper[5029]: I0313 20:42:01.347887 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557242-hl5b9" event={"ID":"231cc164-04f6-42e0-ad5e-6b30fb9dbba3","Type":"ContainerStarted","Data":"df5a07f16d3752a3ce4a09033b378cf9684d4e524a4a93cd635451fe092efcff"} Mar 13 20:42:01 crc kubenswrapper[5029]: I0313 20:42:01.950612 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:42:01 crc kubenswrapper[5029]: I0313 20:42:01.951260 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:42:02 crc kubenswrapper[5029]: I0313 20:42:02.355117 5029 generic.go:334] "Generic (PLEG): container finished" podID="231cc164-04f6-42e0-ad5e-6b30fb9dbba3" containerID="30f8752bb39d0132715ce80cf200c4e9d200fe4eaabb4fcec56aa42ae4a33712" exitCode=0 Mar 13 20:42:02 crc kubenswrapper[5029]: I0313 20:42:02.355204 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557242-hl5b9" 
event={"ID":"231cc164-04f6-42e0-ad5e-6b30fb9dbba3","Type":"ContainerDied","Data":"30f8752bb39d0132715ce80cf200c4e9d200fe4eaabb4fcec56aa42ae4a33712"} Mar 13 20:42:03 crc kubenswrapper[5029]: I0313 20:42:03.622593 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557242-hl5b9" Mar 13 20:42:03 crc kubenswrapper[5029]: I0313 20:42:03.793029 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t26bv\" (UniqueName: \"kubernetes.io/projected/231cc164-04f6-42e0-ad5e-6b30fb9dbba3-kube-api-access-t26bv\") pod \"231cc164-04f6-42e0-ad5e-6b30fb9dbba3\" (UID: \"231cc164-04f6-42e0-ad5e-6b30fb9dbba3\") " Mar 13 20:42:03 crc kubenswrapper[5029]: I0313 20:42:03.798629 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231cc164-04f6-42e0-ad5e-6b30fb9dbba3-kube-api-access-t26bv" (OuterVolumeSpecName: "kube-api-access-t26bv") pod "231cc164-04f6-42e0-ad5e-6b30fb9dbba3" (UID: "231cc164-04f6-42e0-ad5e-6b30fb9dbba3"). InnerVolumeSpecName "kube-api-access-t26bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:42:03 crc kubenswrapper[5029]: I0313 20:42:03.894632 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t26bv\" (UniqueName: \"kubernetes.io/projected/231cc164-04f6-42e0-ad5e-6b30fb9dbba3-kube-api-access-t26bv\") on node \"crc\" DevicePath \"\"" Mar 13 20:42:04 crc kubenswrapper[5029]: I0313 20:42:04.367313 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557242-hl5b9" Mar 13 20:42:04 crc kubenswrapper[5029]: I0313 20:42:04.367345 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557242-hl5b9" event={"ID":"231cc164-04f6-42e0-ad5e-6b30fb9dbba3","Type":"ContainerDied","Data":"df5a07f16d3752a3ce4a09033b378cf9684d4e524a4a93cd635451fe092efcff"} Mar 13 20:42:04 crc kubenswrapper[5029]: I0313 20:42:04.367793 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df5a07f16d3752a3ce4a09033b378cf9684d4e524a4a93cd635451fe092efcff" Mar 13 20:42:04 crc kubenswrapper[5029]: I0313 20:42:04.684354 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557236-wtjlj"] Mar 13 20:42:04 crc kubenswrapper[5029]: I0313 20:42:04.687598 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557236-wtjlj"] Mar 13 20:42:06 crc kubenswrapper[5029]: I0313 20:42:06.606932 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb" path="/var/lib/kubelet/pods/874c2ecf-8fa4-4475-9c0d-7bd06aaa3bfb/volumes" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.105365 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59"] Mar 13 20:42:12 crc kubenswrapper[5029]: E0313 20:42:12.106088 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231cc164-04f6-42e0-ad5e-6b30fb9dbba3" containerName="oc" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.106103 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="231cc164-04f6-42e0-ad5e-6b30fb9dbba3" containerName="oc" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.106258 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="231cc164-04f6-42e0-ad5e-6b30fb9dbba3" containerName="oc" Mar 13 
20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.107030 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.108669 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.116602 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59"] Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.299426 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.299603 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd498\" (UniqueName: \"kubernetes.io/projected/3696e6e7-3920-42fc-8846-f47bfe1ff906-kube-api-access-zd498\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.299671 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.400707 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd498\" (UniqueName: \"kubernetes.io/projected/3696e6e7-3920-42fc-8846-f47bfe1ff906-kube-api-access-zd498\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.400784 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.400867 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.401689 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.401696 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.428458 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd498\" (UniqueName: \"kubernetes.io/projected/3696e6e7-3920-42fc-8846-f47bfe1ff906-kube-api-access-zd498\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.725972 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:12 crc kubenswrapper[5029]: I0313 20:42:12.904485 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59"] Mar 13 20:42:13 crc kubenswrapper[5029]: I0313 20:42:13.418874 5029 generic.go:334] "Generic (PLEG): container finished" podID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerID="fe08950970762eef180643e53b099df8322dccab1d16e5862f7e952e3699e7dc" exitCode=0 Mar 13 20:42:13 crc kubenswrapper[5029]: I0313 20:42:13.418929 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" event={"ID":"3696e6e7-3920-42fc-8846-f47bfe1ff906","Type":"ContainerDied","Data":"fe08950970762eef180643e53b099df8322dccab1d16e5862f7e952e3699e7dc"} Mar 13 20:42:13 crc kubenswrapper[5029]: I0313 20:42:13.418960 5029 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" event={"ID":"3696e6e7-3920-42fc-8846-f47bfe1ff906","Type":"ContainerStarted","Data":"f9273fa9361533a339b22cf4f49f555b4cb5e40ee273bca8b0bb92693aba70db"} Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.209545 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5fsfc"] Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.210984 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.222373 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5fsfc"] Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.325357 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfx6q\" (UniqueName: \"kubernetes.io/projected/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-kube-api-access-vfx6q\") pod \"redhat-operators-5fsfc\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.325413 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-catalog-content\") pod \"redhat-operators-5fsfc\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.325453 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-utilities\") pod \"redhat-operators-5fsfc\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " 
pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.426650 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-utilities\") pod \"redhat-operators-5fsfc\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.426749 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfx6q\" (UniqueName: \"kubernetes.io/projected/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-kube-api-access-vfx6q\") pod \"redhat-operators-5fsfc\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.426785 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-catalog-content\") pod \"redhat-operators-5fsfc\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.427393 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-catalog-content\") pod \"redhat-operators-5fsfc\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.427430 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-utilities\") pod \"redhat-operators-5fsfc\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc 
kubenswrapper[5029]: I0313 20:42:14.456114 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfx6q\" (UniqueName: \"kubernetes.io/projected/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-kube-api-access-vfx6q\") pod \"redhat-operators-5fsfc\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.563359 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:14 crc kubenswrapper[5029]: I0313 20:42:14.798759 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5fsfc"] Mar 13 20:42:14 crc kubenswrapper[5029]: W0313 20:42:14.807453 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a84dcaa_7bc5_4042_b5ab_d46b812fbd7f.slice/crio-b6d5be9fa21d38d989979b837e29f56884a10c16ea36b9c883dd2b43e97acaa6 WatchSource:0}: Error finding container b6d5be9fa21d38d989979b837e29f56884a10c16ea36b9c883dd2b43e97acaa6: Status 404 returned error can't find the container with id b6d5be9fa21d38d989979b837e29f56884a10c16ea36b9c883dd2b43e97acaa6 Mar 13 20:42:15 crc kubenswrapper[5029]: I0313 20:42:15.434533 5029 generic.go:334] "Generic (PLEG): container finished" podID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerID="efd3988695e8646d7a3fc9db243b59ede6d7ec081310f246f10f9eb8a52929c7" exitCode=0 Mar 13 20:42:15 crc kubenswrapper[5029]: I0313 20:42:15.434938 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" event={"ID":"3696e6e7-3920-42fc-8846-f47bfe1ff906","Type":"ContainerDied","Data":"efd3988695e8646d7a3fc9db243b59ede6d7ec081310f246f10f9eb8a52929c7"} Mar 13 20:42:15 crc kubenswrapper[5029]: I0313 20:42:15.439450 5029 generic.go:334] "Generic (PLEG): container 
finished" podID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerID="d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb" exitCode=0 Mar 13 20:42:15 crc kubenswrapper[5029]: I0313 20:42:15.439537 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fsfc" event={"ID":"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f","Type":"ContainerDied","Data":"d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb"} Mar 13 20:42:15 crc kubenswrapper[5029]: I0313 20:42:15.439566 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fsfc" event={"ID":"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f","Type":"ContainerStarted","Data":"b6d5be9fa21d38d989979b837e29f56884a10c16ea36b9c883dd2b43e97acaa6"} Mar 13 20:42:16 crc kubenswrapper[5029]: I0313 20:42:16.449914 5029 generic.go:334] "Generic (PLEG): container finished" podID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerID="d7fa72559dbc511f6d8cfdfc04eb06b17cb587a65357918067e904fa895274d1" exitCode=0 Mar 13 20:42:16 crc kubenswrapper[5029]: I0313 20:42:16.450026 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" event={"ID":"3696e6e7-3920-42fc-8846-f47bfe1ff906","Type":"ContainerDied","Data":"d7fa72559dbc511f6d8cfdfc04eb06b17cb587a65357918067e904fa895274d1"} Mar 13 20:42:16 crc kubenswrapper[5029]: I0313 20:42:16.452186 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fsfc" event={"ID":"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f","Type":"ContainerStarted","Data":"5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2"} Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.460255 5029 generic.go:334] "Generic (PLEG): container finished" podID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerID="5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2" exitCode=0 Mar 13 20:42:17 crc 
kubenswrapper[5029]: I0313 20:42:17.460338 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fsfc" event={"ID":"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f","Type":"ContainerDied","Data":"5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2"} Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.700544 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.775447 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-util\") pod \"3696e6e7-3920-42fc-8846-f47bfe1ff906\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.775565 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-bundle\") pod \"3696e6e7-3920-42fc-8846-f47bfe1ff906\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.775596 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd498\" (UniqueName: \"kubernetes.io/projected/3696e6e7-3920-42fc-8846-f47bfe1ff906-kube-api-access-zd498\") pod \"3696e6e7-3920-42fc-8846-f47bfe1ff906\" (UID: \"3696e6e7-3920-42fc-8846-f47bfe1ff906\") " Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.776750 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-bundle" (OuterVolumeSpecName: "bundle") pod "3696e6e7-3920-42fc-8846-f47bfe1ff906" (UID: "3696e6e7-3920-42fc-8846-f47bfe1ff906"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.782802 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3696e6e7-3920-42fc-8846-f47bfe1ff906-kube-api-access-zd498" (OuterVolumeSpecName: "kube-api-access-zd498") pod "3696e6e7-3920-42fc-8846-f47bfe1ff906" (UID: "3696e6e7-3920-42fc-8846-f47bfe1ff906"). InnerVolumeSpecName "kube-api-access-zd498". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.791027 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-util" (OuterVolumeSpecName: "util") pod "3696e6e7-3920-42fc-8846-f47bfe1ff906" (UID: "3696e6e7-3920-42fc-8846-f47bfe1ff906"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.877430 5029 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-util\") on node \"crc\" DevicePath \"\"" Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.877483 5029 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3696e6e7-3920-42fc-8846-f47bfe1ff906-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:42:17 crc kubenswrapper[5029]: I0313 20:42:17.877497 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd498\" (UniqueName: \"kubernetes.io/projected/3696e6e7-3920-42fc-8846-f47bfe1ff906-kube-api-access-zd498\") on node \"crc\" DevicePath \"\"" Mar 13 20:42:18 crc kubenswrapper[5029]: I0313 20:42:18.471055 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fsfc" 
event={"ID":"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f","Type":"ContainerStarted","Data":"db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd"} Mar 13 20:42:18 crc kubenswrapper[5029]: I0313 20:42:18.475091 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" event={"ID":"3696e6e7-3920-42fc-8846-f47bfe1ff906","Type":"ContainerDied","Data":"f9273fa9361533a339b22cf4f49f555b4cb5e40ee273bca8b0bb92693aba70db"} Mar 13 20:42:18 crc kubenswrapper[5029]: I0313 20:42:18.475119 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9273fa9361533a339b22cf4f49f555b4cb5e40ee273bca8b0bb92693aba70db" Mar 13 20:42:18 crc kubenswrapper[5029]: I0313 20:42:18.475175 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59" Mar 13 20:42:18 crc kubenswrapper[5029]: I0313 20:42:18.496162 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5fsfc" podStartSLOduration=2.009738815 podStartE2EDuration="4.496133333s" podCreationTimestamp="2026-03-13 20:42:14 +0000 UTC" firstStartedPulling="2026-03-13 20:42:15.44065949 +0000 UTC m=+895.456741893" lastFinishedPulling="2026-03-13 20:42:17.927054008 +0000 UTC m=+897.943136411" observedRunningTime="2026-03-13 20:42:18.49532451 +0000 UTC m=+898.511406923" watchObservedRunningTime="2026-03-13 20:42:18.496133333 +0000 UTC m=+898.512215736" Mar 13 20:42:21 crc kubenswrapper[5029]: I0313 20:42:21.286124 5029 scope.go:117] "RemoveContainer" containerID="9d2449622bd8dad48704677f7227479939088aa1ce67dafff5ace52e0184bcab" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.692551 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx"] Mar 13 20:42:22 crc kubenswrapper[5029]: E0313 
20:42:22.693045 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerName="util" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.693056 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerName="util" Mar 13 20:42:22 crc kubenswrapper[5029]: E0313 20:42:22.693071 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerName="pull" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.693077 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerName="pull" Mar 13 20:42:22 crc kubenswrapper[5029]: E0313 20:42:22.693088 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerName="extract" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.693094 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerName="extract" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.693194 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="3696e6e7-3920-42fc-8846-f47bfe1ff906" containerName="extract" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.693613 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.695526 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.695917 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-v29c9" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.696118 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.712419 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx"] Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.838696 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnqg7\" (UniqueName: \"kubernetes.io/projected/e8119630-7aa1-4ab3-a38c-de26de2185d3-kube-api-access-bnqg7\") pod \"nmstate-operator-796d4cfff4-f6kxx\" (UID: \"e8119630-7aa1-4ab3-a38c-de26de2185d3\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.940512 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnqg7\" (UniqueName: \"kubernetes.io/projected/e8119630-7aa1-4ab3-a38c-de26de2185d3-kube-api-access-bnqg7\") pod \"nmstate-operator-796d4cfff4-f6kxx\" (UID: \"e8119630-7aa1-4ab3-a38c-de26de2185d3\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx" Mar 13 20:42:22 crc kubenswrapper[5029]: I0313 20:42:22.962437 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnqg7\" (UniqueName: \"kubernetes.io/projected/e8119630-7aa1-4ab3-a38c-de26de2185d3-kube-api-access-bnqg7\") pod \"nmstate-operator-796d4cfff4-f6kxx\" (UID: 
\"e8119630-7aa1-4ab3-a38c-de26de2185d3\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx" Mar 13 20:42:23 crc kubenswrapper[5029]: I0313 20:42:23.009059 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx" Mar 13 20:42:23 crc kubenswrapper[5029]: I0313 20:42:23.226505 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx"] Mar 13 20:42:23 crc kubenswrapper[5029]: W0313 20:42:23.234241 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8119630_7aa1_4ab3_a38c_de26de2185d3.slice/crio-9cc7eb9badfcd50ae520949220427a329e88d147f2a1cfcfa8e53ecd66242b82 WatchSource:0}: Error finding container 9cc7eb9badfcd50ae520949220427a329e88d147f2a1cfcfa8e53ecd66242b82: Status 404 returned error can't find the container with id 9cc7eb9badfcd50ae520949220427a329e88d147f2a1cfcfa8e53ecd66242b82 Mar 13 20:42:23 crc kubenswrapper[5029]: I0313 20:42:23.501099 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx" event={"ID":"e8119630-7aa1-4ab3-a38c-de26de2185d3","Type":"ContainerStarted","Data":"9cc7eb9badfcd50ae520949220427a329e88d147f2a1cfcfa8e53ecd66242b82"} Mar 13 20:42:24 crc kubenswrapper[5029]: I0313 20:42:24.564529 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:24 crc kubenswrapper[5029]: I0313 20:42:24.564823 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:24 crc kubenswrapper[5029]: I0313 20:42:24.606282 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:25 crc kubenswrapper[5029]: I0313 20:42:25.514633 5029 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx" event={"ID":"e8119630-7aa1-4ab3-a38c-de26de2185d3","Type":"ContainerStarted","Data":"363c6cb7e9318d6f3d2788660402d3d5e0bdb669bdfcc9891a8ef3653e8aca38"} Mar 13 20:42:25 crc kubenswrapper[5029]: I0313 20:42:25.531832 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-f6kxx" podStartSLOduration=1.38868825 podStartE2EDuration="3.531811901s" podCreationTimestamp="2026-03-13 20:42:22 +0000 UTC" firstStartedPulling="2026-03-13 20:42:23.236844629 +0000 UTC m=+903.252927032" lastFinishedPulling="2026-03-13 20:42:25.37996828 +0000 UTC m=+905.396050683" observedRunningTime="2026-03-13 20:42:25.529486947 +0000 UTC m=+905.545569380" watchObservedRunningTime="2026-03-13 20:42:25.531811901 +0000 UTC m=+905.547894324" Mar 13 20:42:25 crc kubenswrapper[5029]: I0313 20:42:25.559593 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:27 crc kubenswrapper[5029]: I0313 20:42:27.198791 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5fsfc"] Mar 13 20:42:27 crc kubenswrapper[5029]: I0313 20:42:27.524441 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5fsfc" podUID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerName="registry-server" containerID="cri-o://db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd" gracePeriod=2 Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.497280 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.533107 5029 generic.go:334] "Generic (PLEG): container finished" podID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerID="db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd" exitCode=0 Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.533161 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fsfc" event={"ID":"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f","Type":"ContainerDied","Data":"db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd"} Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.533195 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fsfc" event={"ID":"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f","Type":"ContainerDied","Data":"b6d5be9fa21d38d989979b837e29f56884a10c16ea36b9c883dd2b43e97acaa6"} Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.533217 5029 scope.go:117] "RemoveContainer" containerID="db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.533344 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fsfc" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.576699 5029 scope.go:117] "RemoveContainer" containerID="5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.632519 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfx6q\" (UniqueName: \"kubernetes.io/projected/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-kube-api-access-vfx6q\") pod \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.633082 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-catalog-content\") pod \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.633228 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-utilities\") pod \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\" (UID: \"2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f\") " Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.634053 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-utilities" (OuterVolumeSpecName: "utilities") pod "2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" (UID: "2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.639129 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-kube-api-access-vfx6q" (OuterVolumeSpecName: "kube-api-access-vfx6q") pod "2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" (UID: "2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f"). InnerVolumeSpecName "kube-api-access-vfx6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.717725 5029 scope.go:117] "RemoveContainer" containerID="d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.731815 5029 scope.go:117] "RemoveContainer" containerID="db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd" Mar 13 20:42:28 crc kubenswrapper[5029]: E0313 20:42:28.732288 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd\": container with ID starting with db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd not found: ID does not exist" containerID="db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.732326 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd"} err="failed to get container status \"db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd\": rpc error: code = NotFound desc = could not find container \"db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd\": container with ID starting with db6f29fe35414de65e1cead3b9b386bc1d29816aef3d5973ddff7fff82efbdcd not found: ID does not exist" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.732354 
5029 scope.go:117] "RemoveContainer" containerID="5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2" Mar 13 20:42:28 crc kubenswrapper[5029]: E0313 20:42:28.732670 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2\": container with ID starting with 5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2 not found: ID does not exist" containerID="5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.732707 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2"} err="failed to get container status \"5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2\": rpc error: code = NotFound desc = could not find container \"5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2\": container with ID starting with 5de0076a90eaefac90520135083f1c33d0428f73c8a66937fe63f072a7760bb2 not found: ID does not exist" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.732737 5029 scope.go:117] "RemoveContainer" containerID="d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb" Mar 13 20:42:28 crc kubenswrapper[5029]: E0313 20:42:28.733123 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb\": container with ID starting with d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb not found: ID does not exist" containerID="d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.733162 5029 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb"} err="failed to get container status \"d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb\": rpc error: code = NotFound desc = could not find container \"d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb\": container with ID starting with d9840a0251595abd718defce29d979aba91dec2ecd4f65fb47dbee7f6332d7cb not found: ID does not exist" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.735151 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.735185 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfx6q\" (UniqueName: \"kubernetes.io/projected/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-kube-api-access-vfx6q\") on node \"crc\" DevicePath \"\"" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.760971 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" (UID: "2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.836827 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.859974 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5fsfc"] Mar 13 20:42:28 crc kubenswrapper[5029]: I0313 20:42:28.864833 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5fsfc"] Mar 13 20:42:30 crc kubenswrapper[5029]: I0313 20:42:30.607433 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" path="/var/lib/kubelet/pods/2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f/volumes" Mar 13 20:42:31 crc kubenswrapper[5029]: I0313 20:42:31.949791 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:42:31 crc kubenswrapper[5029]: I0313 20:42:31.950121 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.270523 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg"] Mar 13 20:42:32 crc kubenswrapper[5029]: E0313 20:42:32.270798 5029 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerName="registry-server" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.270819 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerName="registry-server" Mar 13 20:42:32 crc kubenswrapper[5029]: E0313 20:42:32.270838 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerName="extract-utilities" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.270863 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerName="extract-utilities" Mar 13 20:42:32 crc kubenswrapper[5029]: E0313 20:42:32.270882 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerName="extract-content" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.270889 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerName="extract-content" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.271018 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a84dcaa-7bc5-4042-b5ab-d46b812fbd7f" containerName="registry-server" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.271680 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.274951 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-zsg8s" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.300409 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg"] Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.328611 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w"] Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.330177 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.358926 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.359709 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w"] Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.407365 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flh6m\" (UniqueName: \"kubernetes.io/projected/d5ec24be-1999-4337-961a-aa0fe51a903a-kube-api-access-flh6m\") pod \"nmstate-metrics-9b8c8685d-bs5gg\" (UID: \"d5ec24be-1999-4337-961a-aa0fe51a903a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.417926 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-kfjhn"] Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.419761 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.488798 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn"] Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.489808 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.495772 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.495896 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.495962 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-p7x8p" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.500617 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn"] Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.510946 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngc99\" (UniqueName: \"kubernetes.io/projected/2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488-kube-api-access-ngc99\") pod \"nmstate-webhook-5f558f5558-nxc2w\" (UID: \"2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.511020 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flh6m\" (UniqueName: \"kubernetes.io/projected/d5ec24be-1999-4337-961a-aa0fe51a903a-kube-api-access-flh6m\") pod \"nmstate-metrics-9b8c8685d-bs5gg\" (UID: \"d5ec24be-1999-4337-961a-aa0fe51a903a\") " 
pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.511070 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-nxc2w\" (UID: \"2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.530112 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flh6m\" (UniqueName: \"kubernetes.io/projected/d5ec24be-1999-4337-961a-aa0fe51a903a-kube-api-access-flh6m\") pod \"nmstate-metrics-9b8c8685d-bs5gg\" (UID: \"d5ec24be-1999-4337-961a-aa0fe51a903a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.588481 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.614250 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngc99\" (UniqueName: \"kubernetes.io/projected/2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488-kube-api-access-ngc99\") pod \"nmstate-webhook-5f558f5558-nxc2w\" (UID: \"2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.614347 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-nxc2w\" (UID: \"2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.614385 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.614412 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7lkn\" (UniqueName: \"kubernetes.io/projected/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-kube-api-access-r7lkn\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.614487 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/10c1789d-86d9-4de6-a518-80129bc65d08-dbus-socket\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.614516 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:32 crc kubenswrapper[5029]: E0313 20:42:32.614549 5029 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 13 20:42:32 crc kubenswrapper[5029]: E0313 20:42:32.614700 5029 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488-tls-key-pair podName:2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488 nodeName:}" failed. No retries permitted until 2026-03-13 20:42:33.114680062 +0000 UTC m=+913.130762465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488-tls-key-pair") pod "nmstate-webhook-5f558f5558-nxc2w" (UID: "2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488") : secret "openshift-nmstate-webhook" not found Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.614773 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/10c1789d-86d9-4de6-a518-80129bc65d08-ovs-socket\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.615319 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gzs9\" (UniqueName: \"kubernetes.io/projected/10c1789d-86d9-4de6-a518-80129bc65d08-kube-api-access-6gzs9\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.615375 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/10c1789d-86d9-4de6-a518-80129bc65d08-nmstate-lock\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.637360 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngc99\" (UniqueName: 
\"kubernetes.io/projected/2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488-kube-api-access-ngc99\") pod \"nmstate-webhook-5f558f5558-nxc2w\" (UID: \"2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.716625 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.717032 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7lkn\" (UniqueName: \"kubernetes.io/projected/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-kube-api-access-r7lkn\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.717102 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/10c1789d-86d9-4de6-a518-80129bc65d08-dbus-socket\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.717129 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.717155 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/10c1789d-86d9-4de6-a518-80129bc65d08-ovs-socket\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.717176 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gzs9\" (UniqueName: \"kubernetes.io/projected/10c1789d-86d9-4de6-a518-80129bc65d08-kube-api-access-6gzs9\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.717202 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/10c1789d-86d9-4de6-a518-80129bc65d08-nmstate-lock\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.718024 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5f9969ccd4-gb9hk"] Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.718180 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.718556 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/10c1789d-86d9-4de6-a518-80129bc65d08-nmstate-lock\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 
13 20:42:32 crc kubenswrapper[5029]: E0313 20:42:32.718604 5029 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 13 20:42:32 crc kubenswrapper[5029]: E0313 20:42:32.718655 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-plugin-serving-cert podName:ad29b302-2f20-4bf5-bd5f-c40ac11bebf4 nodeName:}" failed. No retries permitted until 2026-03-13 20:42:33.218638375 +0000 UTC m=+913.234720838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-6rvrn" (UID: "ad29b302-2f20-4bf5-bd5f-c40ac11bebf4") : secret "plugin-serving-cert" not found Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.719050 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/10c1789d-86d9-4de6-a518-80129bc65d08-ovs-socket\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.719136 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/10c1789d-86d9-4de6-a518-80129bc65d08-dbus-socket\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.719284 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.733722 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f9969ccd4-gb9hk"] Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.743684 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7lkn\" (UniqueName: \"kubernetes.io/projected/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-kube-api-access-r7lkn\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.759769 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gzs9\" (UniqueName: \"kubernetes.io/projected/10c1789d-86d9-4de6-a518-80129bc65d08-kube-api-access-6gzs9\") pod \"nmstate-handler-kfjhn\" (UID: \"10c1789d-86d9-4de6-a518-80129bc65d08\") " pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.818060 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-console-serving-cert\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.818104 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-oauth-serving-cert\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.818123 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-trusted-ca-bundle\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.818257 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6pr8\" (UniqueName: \"kubernetes.io/projected/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-kube-api-access-d6pr8\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.818342 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-console-config\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.818373 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-console-oauth-config\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.818392 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-service-ca\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc 
kubenswrapper[5029]: I0313 20:42:32.919364 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-console-serving-cert\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.919421 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-oauth-serving-cert\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.919442 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-trusted-ca-bundle\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.919482 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6pr8\" (UniqueName: \"kubernetes.io/projected/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-kube-api-access-d6pr8\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.919531 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-console-config\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc 
kubenswrapper[5029]: I0313 20:42:32.919556 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-console-oauth-config\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.919573 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-service-ca\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.920495 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-service-ca\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.920605 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-oauth-serving-cert\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.920815 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-console-config\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.922163 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-trusted-ca-bundle\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.922577 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-console-serving-cert\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.923307 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-console-oauth-config\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:32 crc kubenswrapper[5029]: I0313 20:42:32.936559 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6pr8\" (UniqueName: \"kubernetes.io/projected/9bdb51c8-c10d-45b3-aff3-b85bb49bdc58-kube-api-access-d6pr8\") pod \"console-5f9969ccd4-gb9hk\" (UID: \"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58\") " pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.041030 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.050707 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg"] Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.071980 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.121430 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-nxc2w\" (UID: \"2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.127902 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-nxc2w\" (UID: \"2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.229247 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.233559 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad29b302-2f20-4bf5-bd5f-c40ac11bebf4-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6rvrn\" (UID: \"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.246625 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f9969ccd4-gb9hk"] Mar 13 20:42:33 crc kubenswrapper[5029]: W0313 20:42:33.251048 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bdb51c8_c10d_45b3_aff3_b85bb49bdc58.slice/crio-a8e288f08db9f1ff2cd1280f32764acfb357ba18880d08817d4d2f824065952f WatchSource:0}: Error finding container a8e288f08db9f1ff2cd1280f32764acfb357ba18880d08817d4d2f824065952f: Status 404 returned error can't find the container with id a8e288f08db9f1ff2cd1280f32764acfb357ba18880d08817d4d2f824065952f Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.259778 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.406792 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.478401 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w"] Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.568134 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg" event={"ID":"d5ec24be-1999-4337-961a-aa0fe51a903a","Type":"ContainerStarted","Data":"1dceb8fde8db1612d8bb5d243df1a9e3a8c427287dd3137cb1e76125d7313c2d"} Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.569534 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kfjhn" event={"ID":"10c1789d-86d9-4de6-a518-80129bc65d08","Type":"ContainerStarted","Data":"355d9c293a2f256b24a5da5c61f7fe23b327789534ea9a8392e020dce173fa71"} Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.572589 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f9969ccd4-gb9hk" event={"ID":"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58","Type":"ContainerStarted","Data":"4126e1a7eb515c8e33d097f08ee15fa0201436908cea2fe63aced5afeaca0d73"} Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 
20:42:33.572656 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f9969ccd4-gb9hk" event={"ID":"9bdb51c8-c10d-45b3-aff3-b85bb49bdc58","Type":"ContainerStarted","Data":"a8e288f08db9f1ff2cd1280f32764acfb357ba18880d08817d4d2f824065952f"} Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.574791 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" event={"ID":"2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488","Type":"ContainerStarted","Data":"423535affe5951c0b4c41d13d5918c82740f1a199844c0a014c62540f9a30b50"} Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.601105 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5f9969ccd4-gb9hk" podStartSLOduration=1.601072708 podStartE2EDuration="1.601072708s" podCreationTimestamp="2026-03-13 20:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:42:33.591788696 +0000 UTC m=+913.607871099" watchObservedRunningTime="2026-03-13 20:42:33.601072708 +0000 UTC m=+913.617155111" Mar 13 20:42:33 crc kubenswrapper[5029]: I0313 20:42:33.657545 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn"] Mar 13 20:42:33 crc kubenswrapper[5029]: W0313 20:42:33.658061 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad29b302_2f20_4bf5_bd5f_c40ac11bebf4.slice/crio-3a3a0e458e217df55d10e020d7297eb4063320f86f9b1036247cdda02855d4cd WatchSource:0}: Error finding container 3a3a0e458e217df55d10e020d7297eb4063320f86f9b1036247cdda02855d4cd: Status 404 returned error can't find the container with id 3a3a0e458e217df55d10e020d7297eb4063320f86f9b1036247cdda02855d4cd Mar 13 20:42:34 crc kubenswrapper[5029]: I0313 20:42:34.582527 5029 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" event={"ID":"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4","Type":"ContainerStarted","Data":"3a3a0e458e217df55d10e020d7297eb4063320f86f9b1036247cdda02855d4cd"} Mar 13 20:42:36 crc kubenswrapper[5029]: I0313 20:42:36.597739 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg" event={"ID":"d5ec24be-1999-4337-961a-aa0fe51a903a","Type":"ContainerStarted","Data":"9c4b266a9a7c898575f414e43b9cebc0aeea2a53fc64215b791e2b53558aa562"} Mar 13 20:42:36 crc kubenswrapper[5029]: I0313 20:42:36.610151 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kfjhn" event={"ID":"10c1789d-86d9-4de6-a518-80129bc65d08","Type":"ContainerStarted","Data":"8513a425e80492a4cc53dcbf4837c8f3488c9dd850c621517189bc302229d3aa"} Mar 13 20:42:36 crc kubenswrapper[5029]: I0313 20:42:36.610187 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:36 crc kubenswrapper[5029]: I0313 20:42:36.610201 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:42:36 crc kubenswrapper[5029]: I0313 20:42:36.610212 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" event={"ID":"2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488","Type":"ContainerStarted","Data":"59802f8a60e22e297a239bea244c77aa50c045fbc3442993862d8c1dfcf9a8f0"} Mar 13 20:42:36 crc kubenswrapper[5029]: I0313 20:42:36.610231 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" event={"ID":"ad29b302-2f20-4bf5-bd5f-c40ac11bebf4","Type":"ContainerStarted","Data":"9f07d9fe41c79de27e9aae871467b0b78ef4991fd74ae6f56a58b8b9dcbe2918"} Mar 13 20:42:36 crc kubenswrapper[5029]: I0313 20:42:36.614787 5029 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-kfjhn" podStartSLOduration=1.723682396 podStartE2EDuration="4.614775043s" podCreationTimestamp="2026-03-13 20:42:32 +0000 UTC" firstStartedPulling="2026-03-13 20:42:33.065031207 +0000 UTC m=+913.081113610" lastFinishedPulling="2026-03-13 20:42:35.956123854 +0000 UTC m=+915.972206257" observedRunningTime="2026-03-13 20:42:36.613794786 +0000 UTC m=+916.629877189" watchObservedRunningTime="2026-03-13 20:42:36.614775043 +0000 UTC m=+916.630857446" Mar 13 20:42:36 crc kubenswrapper[5029]: I0313 20:42:36.631656 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" podStartSLOduration=2.184797043 podStartE2EDuration="4.63163591s" podCreationTimestamp="2026-03-13 20:42:32 +0000 UTC" firstStartedPulling="2026-03-13 20:42:33.497332083 +0000 UTC m=+913.513414486" lastFinishedPulling="2026-03-13 20:42:35.94417094 +0000 UTC m=+915.960253353" observedRunningTime="2026-03-13 20:42:36.629116452 +0000 UTC m=+916.645198855" watchObservedRunningTime="2026-03-13 20:42:36.63163591 +0000 UTC m=+916.647718313" Mar 13 20:42:36 crc kubenswrapper[5029]: I0313 20:42:36.695101 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6rvrn" podStartSLOduration=2.4118445250000002 podStartE2EDuration="4.695087513s" podCreationTimestamp="2026-03-13 20:42:32 +0000 UTC" firstStartedPulling="2026-03-13 20:42:33.66046215 +0000 UTC m=+913.676544553" lastFinishedPulling="2026-03-13 20:42:35.943705138 +0000 UTC m=+915.959787541" observedRunningTime="2026-03-13 20:42:36.693040727 +0000 UTC m=+916.709123130" watchObservedRunningTime="2026-03-13 20:42:36.695087513 +0000 UTC m=+916.711169906" Mar 13 20:42:38 crc kubenswrapper[5029]: I0313 20:42:38.622993 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg" event={"ID":"d5ec24be-1999-4337-961a-aa0fe51a903a","Type":"ContainerStarted","Data":"740c71d6b6566866f1b70ef102b37990ee4e66b3937cd3aed860236796c23104"} Mar 13 20:42:38 crc kubenswrapper[5029]: I0313 20:42:38.639465 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bs5gg" podStartSLOduration=1.293343046 podStartE2EDuration="6.639441352s" podCreationTimestamp="2026-03-13 20:42:32 +0000 UTC" firstStartedPulling="2026-03-13 20:42:33.063158157 +0000 UTC m=+913.079240560" lastFinishedPulling="2026-03-13 20:42:38.409256463 +0000 UTC m=+918.425338866" observedRunningTime="2026-03-13 20:42:38.634835786 +0000 UTC m=+918.650918189" watchObservedRunningTime="2026-03-13 20:42:38.639441352 +0000 UTC m=+918.655523755" Mar 13 20:42:43 crc kubenswrapper[5029]: I0313 20:42:43.072012 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-kfjhn" Mar 13 20:42:43 crc kubenswrapper[5029]: I0313 20:42:43.072419 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:43 crc kubenswrapper[5029]: I0313 20:42:43.072448 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:43 crc kubenswrapper[5029]: I0313 20:42:43.079363 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:43 crc kubenswrapper[5029]: I0313 20:42:43.666298 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5f9969ccd4-gb9hk" Mar 13 20:42:43 crc kubenswrapper[5029]: I0313 20:42:43.746431 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rvlhd"] Mar 13 20:42:53 crc kubenswrapper[5029]: I0313 20:42:53.265501 5029 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nxc2w" Mar 13 20:43:01 crc kubenswrapper[5029]: I0313 20:43:01.950238 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:43:01 crc kubenswrapper[5029]: I0313 20:43:01.950786 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:43:01 crc kubenswrapper[5029]: I0313 20:43:01.950828 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:43:01 crc kubenswrapper[5029]: I0313 20:43:01.951312 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4bbea3ecaf26f1609521229697004331cac38ad489818c6871ecf93d481648d2"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 20:43:01 crc kubenswrapper[5029]: I0313 20:43:01.951361 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://4bbea3ecaf26f1609521229697004331cac38ad489818c6871ecf93d481648d2" gracePeriod=600 Mar 13 20:43:02 crc kubenswrapper[5029]: I0313 20:43:02.808628 5029 generic.go:334] "Generic 
(PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="4bbea3ecaf26f1609521229697004331cac38ad489818c6871ecf93d481648d2" exitCode=0 Mar 13 20:43:02 crc kubenswrapper[5029]: I0313 20:43:02.808692 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"4bbea3ecaf26f1609521229697004331cac38ad489818c6871ecf93d481648d2"} Mar 13 20:43:02 crc kubenswrapper[5029]: I0313 20:43:02.809201 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"098cf3f8300a8686d628684223c880e3efcc22b58099225528ac37cb2f271026"} Mar 13 20:43:02 crc kubenswrapper[5029]: I0313 20:43:02.809239 5029 scope.go:117] "RemoveContainer" containerID="f8fcc9f784c6978226030105fcd2101ebdcc99b3d39948d8d2fe198f91727390" Mar 13 20:43:05 crc kubenswrapper[5029]: I0313 20:43:05.904706 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d"] Mar 13 20:43:05 crc kubenswrapper[5029]: I0313 20:43:05.907908 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:05 crc kubenswrapper[5029]: I0313 20:43:05.910158 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 13 20:43:05 crc kubenswrapper[5029]: I0313 20:43:05.916694 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d"] Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.014906 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.015097 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9pkf\" (UniqueName: \"kubernetes.io/projected/53744549-0d0b-409a-a51c-67a6f8df65d5-kube-api-access-f9pkf\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.015150 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: 
I0313 20:43:06.116500 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.117043 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9pkf\" (UniqueName: \"kubernetes.io/projected/53744549-0d0b-409a-a51c-67a6f8df65d5-kube-api-access-f9pkf\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.117077 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.117113 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.117372 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.140702 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9pkf\" (UniqueName: \"kubernetes.io/projected/53744549-0d0b-409a-a51c-67a6f8df65d5-kube-api-access-f9pkf\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.231484 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.472554 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d"] Mar 13 20:43:06 crc kubenswrapper[5029]: W0313 20:43:06.481783 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53744549_0d0b_409a_a51c_67a6f8df65d5.slice/crio-f69a38556059ad83702939e6b85498007c9ac21347676645dfeb8e9170c4e34d WatchSource:0}: Error finding container f69a38556059ad83702939e6b85498007c9ac21347676645dfeb8e9170c4e34d: Status 404 returned error can't find the container with id f69a38556059ad83702939e6b85498007c9ac21347676645dfeb8e9170c4e34d Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.835935 5029 generic.go:334] "Generic (PLEG): container finished" podID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerID="63a88e6d11094f458724ca8b4aebc342198e5c26ecff10dcb478625e7269f859" exitCode=0 
Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.836026 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" event={"ID":"53744549-0d0b-409a-a51c-67a6f8df65d5","Type":"ContainerDied","Data":"63a88e6d11094f458724ca8b4aebc342198e5c26ecff10dcb478625e7269f859"} Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.837201 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" event={"ID":"53744549-0d0b-409a-a51c-67a6f8df65d5","Type":"ContainerStarted","Data":"f69a38556059ad83702939e6b85498007c9ac21347676645dfeb8e9170c4e34d"} Mar 13 20:43:06 crc kubenswrapper[5029]: I0313 20:43:06.838369 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 20:43:08 crc kubenswrapper[5029]: I0313 20:43:08.798843 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rvlhd" podUID="38ba7d36-baaf-4e14-aa8e-5236ee9500de" containerName="console" containerID="cri-o://0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4" gracePeriod=15 Mar 13 20:43:08 crc kubenswrapper[5029]: I0313 20:43:08.850959 5029 generic.go:334] "Generic (PLEG): container finished" podID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerID="7c2083f1bae8af6d2d07c050623ab28937cfa492617424b28cc715a7d675e2e7" exitCode=0 Mar 13 20:43:08 crc kubenswrapper[5029]: I0313 20:43:08.851053 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" event={"ID":"53744549-0d0b-409a-a51c-67a6f8df65d5","Type":"ContainerDied","Data":"7c2083f1bae8af6d2d07c050623ab28937cfa492617424b28cc715a7d675e2e7"} Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.215898 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-rvlhd_38ba7d36-baaf-4e14-aa8e-5236ee9500de/console/0.log" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.216436 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.262647 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-service-ca\") pod \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.262959 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-serving-cert\") pod \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.263110 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfhd5\" (UniqueName: \"kubernetes.io/projected/38ba7d36-baaf-4e14-aa8e-5236ee9500de-kube-api-access-qfhd5\") pod \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.263212 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-oauth-serving-cert\") pod \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.263326 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-config\") pod \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.263411 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-oauth-config\") pod \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.263504 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-trusted-ca-bundle\") pod \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\" (UID: \"38ba7d36-baaf-4e14-aa8e-5236ee9500de\") " Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.263896 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "38ba7d36-baaf-4e14-aa8e-5236ee9500de" (UID: "38ba7d36-baaf-4e14-aa8e-5236ee9500de"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.263914 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-config" (OuterVolumeSpecName: "console-config") pod "38ba7d36-baaf-4e14-aa8e-5236ee9500de" (UID: "38ba7d36-baaf-4e14-aa8e-5236ee9500de"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.264107 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "38ba7d36-baaf-4e14-aa8e-5236ee9500de" (UID: "38ba7d36-baaf-4e14-aa8e-5236ee9500de"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.264131 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-service-ca" (OuterVolumeSpecName: "service-ca") pod "38ba7d36-baaf-4e14-aa8e-5236ee9500de" (UID: "38ba7d36-baaf-4e14-aa8e-5236ee9500de"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.269297 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "38ba7d36-baaf-4e14-aa8e-5236ee9500de" (UID: "38ba7d36-baaf-4e14-aa8e-5236ee9500de"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.269364 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ba7d36-baaf-4e14-aa8e-5236ee9500de-kube-api-access-qfhd5" (OuterVolumeSpecName: "kube-api-access-qfhd5") pod "38ba7d36-baaf-4e14-aa8e-5236ee9500de" (UID: "38ba7d36-baaf-4e14-aa8e-5236ee9500de"). InnerVolumeSpecName "kube-api-access-qfhd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.269561 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "38ba7d36-baaf-4e14-aa8e-5236ee9500de" (UID: "38ba7d36-baaf-4e14-aa8e-5236ee9500de"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.365080 5029 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-service-ca\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.365418 5029 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.365496 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfhd5\" (UniqueName: \"kubernetes.io/projected/38ba7d36-baaf-4e14-aa8e-5236ee9500de-kube-api-access-qfhd5\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.365550 5029 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.365602 5029 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.365653 5029 reconciler_common.go:293] "Volume detached for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38ba7d36-baaf-4e14-aa8e-5236ee9500de-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.365711 5029 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38ba7d36-baaf-4e14-aa8e-5236ee9500de-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.860612 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rvlhd_38ba7d36-baaf-4e14-aa8e-5236ee9500de/console/0.log" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.860655 5029 generic.go:334] "Generic (PLEG): container finished" podID="38ba7d36-baaf-4e14-aa8e-5236ee9500de" containerID="0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4" exitCode=2 Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.860735 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rvlhd" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.861970 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rvlhd" event={"ID":"38ba7d36-baaf-4e14-aa8e-5236ee9500de","Type":"ContainerDied","Data":"0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4"} Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.862019 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rvlhd" event={"ID":"38ba7d36-baaf-4e14-aa8e-5236ee9500de","Type":"ContainerDied","Data":"e24525bc8ca3304d6e55337e1af0ff2f8d2b7b55fd479a58d1e572b16ce9caa6"} Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.862040 5029 scope.go:117] "RemoveContainer" containerID="0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.869461 5029 generic.go:334] "Generic (PLEG): container finished" podID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerID="d7d04567e8d15892a92cf599e8f5d748254ab5c7a566e1a068afb9c57af5b578" exitCode=0 Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.869522 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" event={"ID":"53744549-0d0b-409a-a51c-67a6f8df65d5","Type":"ContainerDied","Data":"d7d04567e8d15892a92cf599e8f5d748254ab5c7a566e1a068afb9c57af5b578"} Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.888219 5029 scope.go:117] "RemoveContainer" containerID="0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4" Mar 13 20:43:09 crc kubenswrapper[5029]: E0313 20:43:09.888950 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4\": container with ID starting with 
0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4 not found: ID does not exist" containerID="0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.889001 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4"} err="failed to get container status \"0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4\": rpc error: code = NotFound desc = could not find container \"0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4\": container with ID starting with 0b793540779f3bcc5e2e07fa3c9c874a6a353fbe90fd44594845718d907347b4 not found: ID does not exist" Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.914975 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rvlhd"] Mar 13 20:43:09 crc kubenswrapper[5029]: I0313 20:43:09.918472 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rvlhd"] Mar 13 20:43:10 crc kubenswrapper[5029]: I0313 20:43:10.608603 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38ba7d36-baaf-4e14-aa8e-5236ee9500de" path="/var/lib/kubelet/pods/38ba7d36-baaf-4e14-aa8e-5236ee9500de/volumes" Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.102175 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.189638 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9pkf\" (UniqueName: \"kubernetes.io/projected/53744549-0d0b-409a-a51c-67a6f8df65d5-kube-api-access-f9pkf\") pod \"53744549-0d0b-409a-a51c-67a6f8df65d5\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.189905 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-util\") pod \"53744549-0d0b-409a-a51c-67a6f8df65d5\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.189957 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-bundle\") pod \"53744549-0d0b-409a-a51c-67a6f8df65d5\" (UID: \"53744549-0d0b-409a-a51c-67a6f8df65d5\") " Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.190865 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-bundle" (OuterVolumeSpecName: "bundle") pod "53744549-0d0b-409a-a51c-67a6f8df65d5" (UID: "53744549-0d0b-409a-a51c-67a6f8df65d5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.194876 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53744549-0d0b-409a-a51c-67a6f8df65d5-kube-api-access-f9pkf" (OuterVolumeSpecName: "kube-api-access-f9pkf") pod "53744549-0d0b-409a-a51c-67a6f8df65d5" (UID: "53744549-0d0b-409a-a51c-67a6f8df65d5"). InnerVolumeSpecName "kube-api-access-f9pkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.208731 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-util" (OuterVolumeSpecName: "util") pod "53744549-0d0b-409a-a51c-67a6f8df65d5" (UID: "53744549-0d0b-409a-a51c-67a6f8df65d5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.292180 5029 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-util\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.292223 5029 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53744549-0d0b-409a-a51c-67a6f8df65d5-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.292236 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9pkf\" (UniqueName: \"kubernetes.io/projected/53744549-0d0b-409a-a51c-67a6f8df65d5-kube-api-access-f9pkf\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.884272 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" event={"ID":"53744549-0d0b-409a-a51c-67a6f8df65d5","Type":"ContainerDied","Data":"f69a38556059ad83702939e6b85498007c9ac21347676645dfeb8e9170c4e34d"} Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.884320 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f69a38556059ad83702939e6b85498007c9ac21347676645dfeb8e9170c4e34d" Mar 13 20:43:11 crc kubenswrapper[5029]: I0313 20:43:11.884378 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.931165 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm"] Mar 13 20:43:20 crc kubenswrapper[5029]: E0313 20:43:20.932930 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ba7d36-baaf-4e14-aa8e-5236ee9500de" containerName="console" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.933162 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ba7d36-baaf-4e14-aa8e-5236ee9500de" containerName="console" Mar 13 20:43:20 crc kubenswrapper[5029]: E0313 20:43:20.933178 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerName="util" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.933185 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerName="util" Mar 13 20:43:20 crc kubenswrapper[5029]: E0313 20:43:20.933216 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerName="pull" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.933223 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerName="pull" Mar 13 20:43:20 crc kubenswrapper[5029]: E0313 20:43:20.933261 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerName="extract" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.933268 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerName="extract" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.933748 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ba7d36-baaf-4e14-aa8e-5236ee9500de" 
containerName="console" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.933762 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="53744549-0d0b-409a-a51c-67a6f8df65d5" containerName="extract" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.934359 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.941879 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.942208 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.942415 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.942610 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-txmxd" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.942764 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 13 20:43:20 crc kubenswrapper[5029]: I0313 20:43:20.952176 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm"] Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.029529 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b95a923-5775-4f8c-95aa-be566bc0d78c-apiservice-cert\") pod \"metallb-operator-controller-manager-b55d4cdb9-s58fm\" (UID: \"3b95a923-5775-4f8c-95aa-be566bc0d78c\") " pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 
20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.029608 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55pz4\" (UniqueName: \"kubernetes.io/projected/3b95a923-5775-4f8c-95aa-be566bc0d78c-kube-api-access-55pz4\") pod \"metallb-operator-controller-manager-b55d4cdb9-s58fm\" (UID: \"3b95a923-5775-4f8c-95aa-be566bc0d78c\") " pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.029637 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b95a923-5775-4f8c-95aa-be566bc0d78c-webhook-cert\") pod \"metallb-operator-controller-manager-b55d4cdb9-s58fm\" (UID: \"3b95a923-5775-4f8c-95aa-be566bc0d78c\") " pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.130645 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b95a923-5775-4f8c-95aa-be566bc0d78c-apiservice-cert\") pod \"metallb-operator-controller-manager-b55d4cdb9-s58fm\" (UID: \"3b95a923-5775-4f8c-95aa-be566bc0d78c\") " pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.130708 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55pz4\" (UniqueName: \"kubernetes.io/projected/3b95a923-5775-4f8c-95aa-be566bc0d78c-kube-api-access-55pz4\") pod \"metallb-operator-controller-manager-b55d4cdb9-s58fm\" (UID: \"3b95a923-5775-4f8c-95aa-be566bc0d78c\") " pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.130733 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/3b95a923-5775-4f8c-95aa-be566bc0d78c-webhook-cert\") pod \"metallb-operator-controller-manager-b55d4cdb9-s58fm\" (UID: \"3b95a923-5775-4f8c-95aa-be566bc0d78c\") " pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.138285 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b95a923-5775-4f8c-95aa-be566bc0d78c-webhook-cert\") pod \"metallb-operator-controller-manager-b55d4cdb9-s58fm\" (UID: \"3b95a923-5775-4f8c-95aa-be566bc0d78c\") " pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.141932 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b95a923-5775-4f8c-95aa-be566bc0d78c-apiservice-cert\") pod \"metallb-operator-controller-manager-b55d4cdb9-s58fm\" (UID: \"3b95a923-5775-4f8c-95aa-be566bc0d78c\") " pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.163397 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55pz4\" (UniqueName: \"kubernetes.io/projected/3b95a923-5775-4f8c-95aa-be566bc0d78c-kube-api-access-55pz4\") pod \"metallb-operator-controller-manager-b55d4cdb9-s58fm\" (UID: \"3b95a923-5775-4f8c-95aa-be566bc0d78c\") " pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.255355 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.290123 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt"] Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.292741 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.296123 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.296184 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.296644 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-js5nt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.324354 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt"] Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.336784 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30e521ab-6234-4e1a-9036-7c709e06c9b1-apiservice-cert\") pod \"metallb-operator-webhook-server-94f7c7558-44tlt\" (UID: \"30e521ab-6234-4e1a-9036-7c709e06c9b1\") " pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.336953 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mx2k\" (UniqueName: \"kubernetes.io/projected/30e521ab-6234-4e1a-9036-7c709e06c9b1-kube-api-access-8mx2k\") pod 
\"metallb-operator-webhook-server-94f7c7558-44tlt\" (UID: \"30e521ab-6234-4e1a-9036-7c709e06c9b1\") " pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.337003 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30e521ab-6234-4e1a-9036-7c709e06c9b1-webhook-cert\") pod \"metallb-operator-webhook-server-94f7c7558-44tlt\" (UID: \"30e521ab-6234-4e1a-9036-7c709e06c9b1\") " pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.442560 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mx2k\" (UniqueName: \"kubernetes.io/projected/30e521ab-6234-4e1a-9036-7c709e06c9b1-kube-api-access-8mx2k\") pod \"metallb-operator-webhook-server-94f7c7558-44tlt\" (UID: \"30e521ab-6234-4e1a-9036-7c709e06c9b1\") " pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.442631 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30e521ab-6234-4e1a-9036-7c709e06c9b1-webhook-cert\") pod \"metallb-operator-webhook-server-94f7c7558-44tlt\" (UID: \"30e521ab-6234-4e1a-9036-7c709e06c9b1\") " pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.442686 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30e521ab-6234-4e1a-9036-7c709e06c9b1-apiservice-cert\") pod \"metallb-operator-webhook-server-94f7c7558-44tlt\" (UID: \"30e521ab-6234-4e1a-9036-7c709e06c9b1\") " pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.456691 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30e521ab-6234-4e1a-9036-7c709e06c9b1-webhook-cert\") pod \"metallb-operator-webhook-server-94f7c7558-44tlt\" (UID: \"30e521ab-6234-4e1a-9036-7c709e06c9b1\") " pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.470124 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30e521ab-6234-4e1a-9036-7c709e06c9b1-apiservice-cert\") pod \"metallb-operator-webhook-server-94f7c7558-44tlt\" (UID: \"30e521ab-6234-4e1a-9036-7c709e06c9b1\") " pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.513551 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mx2k\" (UniqueName: \"kubernetes.io/projected/30e521ab-6234-4e1a-9036-7c709e06c9b1-kube-api-access-8mx2k\") pod \"metallb-operator-webhook-server-94f7c7558-44tlt\" (UID: \"30e521ab-6234-4e1a-9036-7c709e06c9b1\") " pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.646655 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.893409 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm"] Mar 13 20:43:21 crc kubenswrapper[5029]: W0313 20:43:21.902140 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b95a923_5775_4f8c_95aa_be566bc0d78c.slice/crio-b4d4877402dce3910464e6d6184851cd7b69edd84f9dcbc3bbee5e462268fdce WatchSource:0}: Error finding container b4d4877402dce3910464e6d6184851cd7b69edd84f9dcbc3bbee5e462268fdce: Status 404 returned error can't find the container with id b4d4877402dce3910464e6d6184851cd7b69edd84f9dcbc3bbee5e462268fdce Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.953465 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt"] Mar 13 20:43:21 crc kubenswrapper[5029]: W0313 20:43:21.958823 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30e521ab_6234_4e1a_9036_7c709e06c9b1.slice/crio-a4027f145fbc2ca178b249fc1e42b492c377555d543463c1a300f47b70f784a1 WatchSource:0}: Error finding container a4027f145fbc2ca178b249fc1e42b492c377555d543463c1a300f47b70f784a1: Status 404 returned error can't find the container with id a4027f145fbc2ca178b249fc1e42b492c377555d543463c1a300f47b70f784a1 Mar 13 20:43:21 crc kubenswrapper[5029]: I0313 20:43:21.964094 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" event={"ID":"3b95a923-5775-4f8c-95aa-be566bc0d78c","Type":"ContainerStarted","Data":"b4d4877402dce3910464e6d6184851cd7b69edd84f9dcbc3bbee5e462268fdce"} Mar 13 20:43:22 crc kubenswrapper[5029]: I0313 20:43:22.974362 5029 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" event={"ID":"30e521ab-6234-4e1a-9036-7c709e06c9b1","Type":"ContainerStarted","Data":"a4027f145fbc2ca178b249fc1e42b492c377555d543463c1a300f47b70f784a1"} Mar 13 20:43:28 crc kubenswrapper[5029]: I0313 20:43:28.162534 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" event={"ID":"3b95a923-5775-4f8c-95aa-be566bc0d78c","Type":"ContainerStarted","Data":"e2c3b67697066f156bfc40c072c1807d18240cb3ca63ddd257f2fdff66717b7a"} Mar 13 20:43:28 crc kubenswrapper[5029]: I0313 20:43:28.163414 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:43:28 crc kubenswrapper[5029]: I0313 20:43:28.167827 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" event={"ID":"30e521ab-6234-4e1a-9036-7c709e06c9b1","Type":"ContainerStarted","Data":"94ae15ddf18f7586c2404bb629a60654ad33e8377a210f7ce7e5a738fb29191b"} Mar 13 20:43:28 crc kubenswrapper[5029]: I0313 20:43:28.168006 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:28 crc kubenswrapper[5029]: I0313 20:43:28.197959 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" podStartSLOduration=2.976629148 podStartE2EDuration="8.197941058s" podCreationTimestamp="2026-03-13 20:43:20 +0000 UTC" firstStartedPulling="2026-03-13 20:43:21.909603774 +0000 UTC m=+961.925686177" lastFinishedPulling="2026-03-13 20:43:27.130915684 +0000 UTC m=+967.146998087" observedRunningTime="2026-03-13 20:43:28.192404937 +0000 UTC m=+968.208487340" watchObservedRunningTime="2026-03-13 20:43:28.197941058 +0000 UTC m=+968.214023481" Mar 13 20:43:28 crc 
kubenswrapper[5029]: I0313 20:43:28.216264 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" podStartSLOduration=2.018215715 podStartE2EDuration="7.216248634s" podCreationTimestamp="2026-03-13 20:43:21 +0000 UTC" firstStartedPulling="2026-03-13 20:43:21.963418914 +0000 UTC m=+961.979501317" lastFinishedPulling="2026-03-13 20:43:27.161451833 +0000 UTC m=+967.177534236" observedRunningTime="2026-03-13 20:43:28.215078413 +0000 UTC m=+968.231160826" watchObservedRunningTime="2026-03-13 20:43:28.216248634 +0000 UTC m=+968.232331037" Mar 13 20:43:41 crc kubenswrapper[5029]: I0313 20:43:41.653744 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-94f7c7558-44tlt" Mar 13 20:43:44 crc kubenswrapper[5029]: I0313 20:43:44.903631 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wknvx"] Mar 13 20:43:44 crc kubenswrapper[5029]: I0313 20:43:44.906029 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:44 crc kubenswrapper[5029]: I0313 20:43:44.925549 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wknvx"] Mar 13 20:43:44 crc kubenswrapper[5029]: I0313 20:43:44.941370 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-catalog-content\") pod \"certified-operators-wknvx\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:44 crc kubenswrapper[5029]: I0313 20:43:44.941494 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-utilities\") pod \"certified-operators-wknvx\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:44 crc kubenswrapper[5029]: I0313 20:43:44.941830 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k5dn\" (UniqueName: \"kubernetes.io/projected/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-kube-api-access-7k5dn\") pod \"certified-operators-wknvx\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:45 crc kubenswrapper[5029]: I0313 20:43:45.043286 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-catalog-content\") pod \"certified-operators-wknvx\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:45 crc kubenswrapper[5029]: I0313 20:43:45.043423 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-utilities\") pod \"certified-operators-wknvx\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:45 crc kubenswrapper[5029]: I0313 20:43:45.043522 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k5dn\" (UniqueName: \"kubernetes.io/projected/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-kube-api-access-7k5dn\") pod \"certified-operators-wknvx\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:45 crc kubenswrapper[5029]: I0313 20:43:45.044185 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-catalog-content\") pod \"certified-operators-wknvx\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:45 crc kubenswrapper[5029]: I0313 20:43:45.044217 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-utilities\") pod \"certified-operators-wknvx\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:45 crc kubenswrapper[5029]: I0313 20:43:45.067312 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k5dn\" (UniqueName: \"kubernetes.io/projected/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-kube-api-access-7k5dn\") pod \"certified-operators-wknvx\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:45 crc kubenswrapper[5029]: I0313 20:43:45.224870 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:45 crc kubenswrapper[5029]: I0313 20:43:45.495246 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wknvx"] Mar 13 20:43:46 crc kubenswrapper[5029]: I0313 20:43:46.306819 5029 generic.go:334] "Generic (PLEG): container finished" podID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerID="3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8" exitCode=0 Mar 13 20:43:46 crc kubenswrapper[5029]: I0313 20:43:46.306935 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wknvx" event={"ID":"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3","Type":"ContainerDied","Data":"3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8"} Mar 13 20:43:46 crc kubenswrapper[5029]: I0313 20:43:46.307451 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wknvx" event={"ID":"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3","Type":"ContainerStarted","Data":"e6b4e5f12e6e3e585e4d4ea24a0d9fd83cb3f75f1e4074b2ab2056be2c058baf"} Mar 13 20:43:48 crc kubenswrapper[5029]: I0313 20:43:48.327538 5029 generic.go:334] "Generic (PLEG): container finished" podID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerID="f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d" exitCode=0 Mar 13 20:43:48 crc kubenswrapper[5029]: I0313 20:43:48.328122 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wknvx" event={"ID":"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3","Type":"ContainerDied","Data":"f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d"} Mar 13 20:43:49 crc kubenswrapper[5029]: I0313 20:43:49.341420 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wknvx" 
event={"ID":"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3","Type":"ContainerStarted","Data":"95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab"} Mar 13 20:43:49 crc kubenswrapper[5029]: I0313 20:43:49.368611 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wknvx" podStartSLOduration=2.71609018 podStartE2EDuration="5.368581381s" podCreationTimestamp="2026-03-13 20:43:44 +0000 UTC" firstStartedPulling="2026-03-13 20:43:46.311967931 +0000 UTC m=+986.328050334" lastFinishedPulling="2026-03-13 20:43:48.964459132 +0000 UTC m=+988.980541535" observedRunningTime="2026-03-13 20:43:49.363479612 +0000 UTC m=+989.379562015" watchObservedRunningTime="2026-03-13 20:43:49.368581381 +0000 UTC m=+989.384663784" Mar 13 20:43:55 crc kubenswrapper[5029]: I0313 20:43:55.225648 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:55 crc kubenswrapper[5029]: I0313 20:43:55.226648 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:55 crc kubenswrapper[5029]: I0313 20:43:55.267017 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:55 crc kubenswrapper[5029]: I0313 20:43:55.449696 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:55 crc kubenswrapper[5029]: I0313 20:43:55.500994 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wknvx"] Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.420428 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wknvx" podUID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerName="registry-server" 
containerID="cri-o://95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab" gracePeriod=2 Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.823524 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.840329 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k5dn\" (UniqueName: \"kubernetes.io/projected/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-kube-api-access-7k5dn\") pod \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.840426 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-catalog-content\") pod \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.840475 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-utilities\") pod \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\" (UID: \"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3\") " Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.841620 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-utilities" (OuterVolumeSpecName: "utilities") pod "1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" (UID: "1fbb70aa-63ea-490c-a45d-fceb26f1cfa3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.859619 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-kube-api-access-7k5dn" (OuterVolumeSpecName: "kube-api-access-7k5dn") pod "1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" (UID: "1fbb70aa-63ea-490c-a45d-fceb26f1cfa3"). InnerVolumeSpecName "kube-api-access-7k5dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.901338 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" (UID: "1fbb70aa-63ea-490c-a45d-fceb26f1cfa3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.942587 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k5dn\" (UniqueName: \"kubernetes.io/projected/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-kube-api-access-7k5dn\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.942639 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:57 crc kubenswrapper[5029]: I0313 20:43:57.942656 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.431913 5029 generic.go:334] "Generic (PLEG): container finished" podID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" 
containerID="95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab" exitCode=0 Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.431979 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wknvx" event={"ID":"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3","Type":"ContainerDied","Data":"95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab"} Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.432023 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wknvx" event={"ID":"1fbb70aa-63ea-490c-a45d-fceb26f1cfa3","Type":"ContainerDied","Data":"e6b4e5f12e6e3e585e4d4ea24a0d9fd83cb3f75f1e4074b2ab2056be2c058baf"} Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.432051 5029 scope.go:117] "RemoveContainer" containerID="95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.432195 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wknvx" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.452870 5029 scope.go:117] "RemoveContainer" containerID="f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.490152 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wknvx"] Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.495494 5029 scope.go:117] "RemoveContainer" containerID="3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.498418 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wknvx"] Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.514056 5029 scope.go:117] "RemoveContainer" containerID="95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab" Mar 13 20:43:58 crc kubenswrapper[5029]: E0313 20:43:58.514726 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab\": container with ID starting with 95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab not found: ID does not exist" containerID="95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.514771 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab"} err="failed to get container status \"95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab\": rpc error: code = NotFound desc = could not find container \"95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab\": container with ID starting with 95ea7893c67b9ed1ae2fbc2aa888b1f75f994b9f9b5ad000f8f3889f365434ab not 
found: ID does not exist" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.514806 5029 scope.go:117] "RemoveContainer" containerID="f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d" Mar 13 20:43:58 crc kubenswrapper[5029]: E0313 20:43:58.515346 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d\": container with ID starting with f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d not found: ID does not exist" containerID="f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.515368 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d"} err="failed to get container status \"f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d\": rpc error: code = NotFound desc = could not find container \"f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d\": container with ID starting with f603154c2122ed5c1e3c27d7f0c31be3720a85002e89256ae983b42a7475bb0d not found: ID does not exist" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.515382 5029 scope.go:117] "RemoveContainer" containerID="3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8" Mar 13 20:43:58 crc kubenswrapper[5029]: E0313 20:43:58.515651 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8\": container with ID starting with 3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8 not found: ID does not exist" containerID="3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.515684 5029 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8"} err="failed to get container status \"3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8\": rpc error: code = NotFound desc = could not find container \"3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8\": container with ID starting with 3381a241c8259a12851bf8dd36714cd7a0f3eff8f76c84e7141cf49f84ae8fd8 not found: ID does not exist" Mar 13 20:43:58 crc kubenswrapper[5029]: I0313 20:43:58.608138 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" path="/var/lib/kubelet/pods/1fbb70aa-63ea-490c-a45d-fceb26f1cfa3/volumes" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.133214 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557244-t2n87"] Mar 13 20:44:00 crc kubenswrapper[5029]: E0313 20:44:00.133703 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerName="extract-utilities" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.133715 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerName="extract-utilities" Mar 13 20:44:00 crc kubenswrapper[5029]: E0313 20:44:00.133733 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerName="extract-content" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.133741 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerName="extract-content" Mar 13 20:44:00 crc kubenswrapper[5029]: E0313 20:44:00.133754 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerName="registry-server" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 
20:44:00.133760 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerName="registry-server" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.133897 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fbb70aa-63ea-490c-a45d-fceb26f1cfa3" containerName="registry-server" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.134561 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557244-t2n87" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.137119 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.137222 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.137348 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.144830 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557244-t2n87"] Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.175480 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9d75\" (UniqueName: \"kubernetes.io/projected/ea489c47-d9a5-433d-ae81-17d2a22b8b45-kube-api-access-f9d75\") pod \"auto-csr-approver-29557244-t2n87\" (UID: \"ea489c47-d9a5-433d-ae81-17d2a22b8b45\") " pod="openshift-infra/auto-csr-approver-29557244-t2n87" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.276434 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9d75\" (UniqueName: \"kubernetes.io/projected/ea489c47-d9a5-433d-ae81-17d2a22b8b45-kube-api-access-f9d75\") pod \"auto-csr-approver-29557244-t2n87\" (UID: 
\"ea489c47-d9a5-433d-ae81-17d2a22b8b45\") " pod="openshift-infra/auto-csr-approver-29557244-t2n87" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.296570 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9d75\" (UniqueName: \"kubernetes.io/projected/ea489c47-d9a5-433d-ae81-17d2a22b8b45-kube-api-access-f9d75\") pod \"auto-csr-approver-29557244-t2n87\" (UID: \"ea489c47-d9a5-433d-ae81-17d2a22b8b45\") " pod="openshift-infra/auto-csr-approver-29557244-t2n87" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.451274 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557244-t2n87" Mar 13 20:44:00 crc kubenswrapper[5029]: I0313 20:44:00.846696 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557244-t2n87"] Mar 13 20:44:01 crc kubenswrapper[5029]: I0313 20:44:01.258590 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b55d4cdb9-s58fm" Mar 13 20:44:01 crc kubenswrapper[5029]: I0313 20:44:01.450954 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557244-t2n87" event={"ID":"ea489c47-d9a5-433d-ae81-17d2a22b8b45","Type":"ContainerStarted","Data":"e700aac00425067614b354008125130601359812c4241623cda1af314bb95a94"} Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.072595 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-f26jf"] Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.078783 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.098786 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.099015 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.102386 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8"] Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.102742 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zvzqg" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.103203 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.105207 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.114440 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8"] Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.184315 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tp4f4"] Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.185313 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.189729 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.189928 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.190045 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.190268 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lhvcb" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.201638 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-tlxnq"] Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.202200 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-frr-sockets\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.202245 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/62643dbe-126d-43e2-a08e-483ca7864ea6-frr-startup\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.202272 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-reloader\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " 
pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.202304 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62643dbe-126d-43e2-a08e-483ca7864ea6-metrics-certs\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.202322 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-frr-conf\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.202344 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g55tr\" (UniqueName: \"kubernetes.io/projected/62643dbe-126d-43e2-a08e-483ca7864ea6-kube-api-access-g55tr\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.202362 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-metrics\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.202815 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.207006 5029 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.236228 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-tlxnq"] Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.303834 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4vzr\" (UniqueName: \"kubernetes.io/projected/4ae672f1-e9e8-4adc-8b6d-a0005d030621-kube-api-access-s4vzr\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.303971 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/62643dbe-126d-43e2-a08e-483ca7864ea6-frr-startup\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304002 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6n8l\" (UniqueName: \"kubernetes.io/projected/148d0749-47d0-44a8-b445-9464b9370508-kube-api-access-z6n8l\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304049 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-reloader\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 
20:44:02.304090 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8wr9\" (UniqueName: \"kubernetes.io/projected/f43167b4-ff02-4f87-98af-4f7e445e4620-kube-api-access-g8wr9\") pod \"frr-k8s-webhook-server-bcc4b6f68-mrnn8\" (UID: \"f43167b4-ff02-4f87-98af-4f7e445e4620\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304120 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f43167b4-ff02-4f87-98af-4f7e445e4620-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-mrnn8\" (UID: \"f43167b4-ff02-4f87-98af-4f7e445e4620\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304153 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-memberlist\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304177 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62643dbe-126d-43e2-a08e-483ca7864ea6-metrics-certs\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304197 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/148d0749-47d0-44a8-b445-9464b9370508-metrics-certs\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 
20:44:02.304223 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-frr-conf\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304251 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g55tr\" (UniqueName: \"kubernetes.io/projected/62643dbe-126d-43e2-a08e-483ca7864ea6-kube-api-access-g55tr\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304273 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-metrics\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304302 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-metrics-certs\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304339 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/148d0749-47d0-44a8-b445-9464b9370508-cert\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304362 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-frr-sockets\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.304390 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4ae672f1-e9e8-4adc-8b6d-a0005d030621-metallb-excludel2\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.305208 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-reloader\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.305744 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/62643dbe-126d-43e2-a08e-483ca7864ea6-frr-startup\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.306314 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-metrics\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.306344 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-frr-sockets\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc 
kubenswrapper[5029]: I0313 20:44:02.306423 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/62643dbe-126d-43e2-a08e-483ca7864ea6-frr-conf\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.312868 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62643dbe-126d-43e2-a08e-483ca7864ea6-metrics-certs\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.334776 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g55tr\" (UniqueName: \"kubernetes.io/projected/62643dbe-126d-43e2-a08e-483ca7864ea6-kube-api-access-g55tr\") pod \"frr-k8s-f26jf\" (UID: \"62643dbe-126d-43e2-a08e-483ca7864ea6\") " pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.386533 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5qdzq"] Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.388062 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.406415 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-metrics-certs\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.406509 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/148d0749-47d0-44a8-b445-9464b9370508-cert\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.406545 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4ae672f1-e9e8-4adc-8b6d-a0005d030621-metallb-excludel2\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.406586 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4vzr\" (UniqueName: \"kubernetes.io/projected/4ae672f1-e9e8-4adc-8b6d-a0005d030621-kube-api-access-s4vzr\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.406614 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6n8l\" (UniqueName: \"kubernetes.io/projected/148d0749-47d0-44a8-b445-9464b9370508-kube-api-access-z6n8l\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc 
kubenswrapper[5029]: I0313 20:44:02.406656 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8wr9\" (UniqueName: \"kubernetes.io/projected/f43167b4-ff02-4f87-98af-4f7e445e4620-kube-api-access-g8wr9\") pod \"frr-k8s-webhook-server-bcc4b6f68-mrnn8\" (UID: \"f43167b4-ff02-4f87-98af-4f7e445e4620\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.406684 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f43167b4-ff02-4f87-98af-4f7e445e4620-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-mrnn8\" (UID: \"f43167b4-ff02-4f87-98af-4f7e445e4620\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.406709 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-memberlist\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.406728 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/148d0749-47d0-44a8-b445-9464b9370508-metrics-certs\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: E0313 20:44:02.406939 5029 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 13 20:44:02 crc kubenswrapper[5029]: E0313 20:44:02.407027 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/148d0749-47d0-44a8-b445-9464b9370508-metrics-certs podName:148d0749-47d0-44a8-b445-9464b9370508 nodeName:}" failed. 
No retries permitted until 2026-03-13 20:44:02.906996031 +0000 UTC m=+1002.923078424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/148d0749-47d0-44a8-b445-9464b9370508-metrics-certs") pod "controller-7bb4cc7c98-tlxnq" (UID: "148d0749-47d0-44a8-b445-9464b9370508") : secret "controller-certs-secret" not found Mar 13 20:44:02 crc kubenswrapper[5029]: E0313 20:44:02.407271 5029 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 20:44:02 crc kubenswrapper[5029]: E0313 20:44:02.407359 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-memberlist podName:4ae672f1-e9e8-4adc-8b6d-a0005d030621 nodeName:}" failed. No retries permitted until 2026-03-13 20:44:02.90733606 +0000 UTC m=+1002.923418653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-memberlist") pod "speaker-tp4f4" (UID: "4ae672f1-e9e8-4adc-8b6d-a0005d030621") : secret "metallb-memberlist" not found Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.408929 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4ae672f1-e9e8-4adc-8b6d-a0005d030621-metallb-excludel2\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.410667 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f43167b4-ff02-4f87-98af-4f7e445e4620-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-mrnn8\" (UID: \"f43167b4-ff02-4f87-98af-4f7e445e4620\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.413884 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-metrics-certs\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.418366 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/148d0749-47d0-44a8-b445-9464b9370508-cert\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.419331 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qdzq"] Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.432874 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6n8l\" (UniqueName: \"kubernetes.io/projected/148d0749-47d0-44a8-b445-9464b9370508-kube-api-access-z6n8l\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.435492 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.447137 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4vzr\" (UniqueName: \"kubernetes.io/projected/4ae672f1-e9e8-4adc-8b6d-a0005d030621-kube-api-access-s4vzr\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.449825 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8wr9\" (UniqueName: \"kubernetes.io/projected/f43167b4-ff02-4f87-98af-4f7e445e4620-kube-api-access-g8wr9\") pod \"frr-k8s-webhook-server-bcc4b6f68-mrnn8\" (UID: \"f43167b4-ff02-4f87-98af-4f7e445e4620\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.450160 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.508943 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5ptq\" (UniqueName: \"kubernetes.io/projected/a43ada03-a22a-4bc0-bb38-242a917d3562-kube-api-access-q5ptq\") pod \"community-operators-5qdzq\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.509503 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-utilities\") pod \"community-operators-5qdzq\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.509711 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-catalog-content\") pod \"community-operators-5qdzq\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.611176 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5ptq\" (UniqueName: \"kubernetes.io/projected/a43ada03-a22a-4bc0-bb38-242a917d3562-kube-api-access-q5ptq\") pod \"community-operators-5qdzq\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.611698 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-utilities\") pod \"community-operators-5qdzq\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.611743 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-catalog-content\") pod \"community-operators-5qdzq\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.612903 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-utilities\") pod \"community-operators-5qdzq\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.613057 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-catalog-content\") pod \"community-operators-5qdzq\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.635890 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5ptq\" (UniqueName: \"kubernetes.io/projected/a43ada03-a22a-4bc0-bb38-242a917d3562-kube-api-access-q5ptq\") pod \"community-operators-5qdzq\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.711934 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.741148 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8"] Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.916318 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/148d0749-47d0-44a8-b445-9464b9370508-metrics-certs\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.916999 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-memberlist\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:02 crc kubenswrapper[5029]: E0313 20:44:02.917197 5029 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 20:44:02 
crc kubenswrapper[5029]: E0313 20:44:02.917296 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-memberlist podName:4ae672f1-e9e8-4adc-8b6d-a0005d030621 nodeName:}" failed. No retries permitted until 2026-03-13 20:44:03.917269852 +0000 UTC m=+1003.933352255 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-memberlist") pod "speaker-tp4f4" (UID: "4ae672f1-e9e8-4adc-8b6d-a0005d030621") : secret "metallb-memberlist" not found Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.923705 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/148d0749-47d0-44a8-b445-9464b9370508-metrics-certs\") pod \"controller-7bb4cc7c98-tlxnq\" (UID: \"148d0749-47d0-44a8-b445-9464b9370508\") " pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:02 crc kubenswrapper[5029]: I0313 20:44:02.982556 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qdzq"] Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.151358 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.433611 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-tlxnq"] Mar 13 20:44:03 crc kubenswrapper[5029]: W0313 20:44:03.439806 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod148d0749_47d0_44a8_b445_9464b9370508.slice/crio-d2f751229da704456140ca0076894ca9fd31aa59c42d822c42545070c308c8ef WatchSource:0}: Error finding container d2f751229da704456140ca0076894ca9fd31aa59c42d822c42545070c308c8ef: Status 404 returned error can't find the container with id d2f751229da704456140ca0076894ca9fd31aa59c42d822c42545070c308c8ef Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.485078 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" event={"ID":"f43167b4-ff02-4f87-98af-4f7e445e4620","Type":"ContainerStarted","Data":"fa6c0f1866423d2ea799ec28100d09f07fdbcb17db228447e4076e2c3f068348"} Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.487477 5029 generic.go:334] "Generic (PLEG): container finished" podID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerID="e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84" exitCode=0 Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.488240 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qdzq" event={"ID":"a43ada03-a22a-4bc0-bb38-242a917d3562","Type":"ContainerDied","Data":"e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84"} Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.488260 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qdzq" 
event={"ID":"a43ada03-a22a-4bc0-bb38-242a917d3562","Type":"ContainerStarted","Data":"c7aa28d9ef8b91d0cc77fa1769692b0121967ef94b9290158d785a479e130336"} Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.491486 5029 generic.go:334] "Generic (PLEG): container finished" podID="ea489c47-d9a5-433d-ae81-17d2a22b8b45" containerID="20b28818464ddc70db1b2cd377a70d954e626278e81423aee0e2982033924bb1" exitCode=0 Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.491697 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557244-t2n87" event={"ID":"ea489c47-d9a5-433d-ae81-17d2a22b8b45","Type":"ContainerDied","Data":"20b28818464ddc70db1b2cd377a70d954e626278e81423aee0e2982033924bb1"} Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.493634 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerStarted","Data":"5192ad0b677eaa3943d9b40df3c542e3258c6522f30694539816eff1c7352ffa"} Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.495170 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-tlxnq" event={"ID":"148d0749-47d0-44a8-b445-9464b9370508","Type":"ContainerStarted","Data":"d2f751229da704456140ca0076894ca9fd31aa59c42d822c42545070c308c8ef"} Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.932162 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-memberlist\") pod \"speaker-tp4f4\" (UID: \"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:03 crc kubenswrapper[5029]: I0313 20:44:03.947102 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ae672f1-e9e8-4adc-8b6d-a0005d030621-memberlist\") pod \"speaker-tp4f4\" (UID: 
\"4ae672f1-e9e8-4adc-8b6d-a0005d030621\") " pod="metallb-system/speaker-tp4f4" Mar 13 20:44:04 crc kubenswrapper[5029]: I0313 20:44:04.042199 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tp4f4" Mar 13 20:44:04 crc kubenswrapper[5029]: I0313 20:44:04.506022 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-tlxnq" event={"ID":"148d0749-47d0-44a8-b445-9464b9370508","Type":"ContainerStarted","Data":"1a6d84124c894e7da5547ddef712afa0707584adc87aa431e0ded3ed72a8b4ba"} Mar 13 20:44:04 crc kubenswrapper[5029]: I0313 20:44:04.506629 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-tlxnq" event={"ID":"148d0749-47d0-44a8-b445-9464b9370508","Type":"ContainerStarted","Data":"8f63ed70ec9ff7d55fa29257e7126be4015c515ad3d478548d22c402a220d73e"} Mar 13 20:44:04 crc kubenswrapper[5029]: I0313 20:44:04.506654 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:04 crc kubenswrapper[5029]: I0313 20:44:04.507532 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tp4f4" event={"ID":"4ae672f1-e9e8-4adc-8b6d-a0005d030621","Type":"ContainerStarted","Data":"1fc29d1c0f81b003f559b82d71e8f5409f597f9e4b0b71b21cada725cb44c908"} Mar 13 20:44:04 crc kubenswrapper[5029]: I0313 20:44:04.511523 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qdzq" event={"ID":"a43ada03-a22a-4bc0-bb38-242a917d3562","Type":"ContainerStarted","Data":"3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910"} Mar 13 20:44:04 crc kubenswrapper[5029]: I0313 20:44:04.541346 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-tlxnq" podStartSLOduration=2.541307576 podStartE2EDuration="2.541307576s" podCreationTimestamp="2026-03-13 20:44:02 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:44:04.534526663 +0000 UTC m=+1004.550609096" watchObservedRunningTime="2026-03-13 20:44:04.541307576 +0000 UTC m=+1004.557389979" Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.048727 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557244-t2n87" Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.163402 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9d75\" (UniqueName: \"kubernetes.io/projected/ea489c47-d9a5-433d-ae81-17d2a22b8b45-kube-api-access-f9d75\") pod \"ea489c47-d9a5-433d-ae81-17d2a22b8b45\" (UID: \"ea489c47-d9a5-433d-ae81-17d2a22b8b45\") " Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.169370 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea489c47-d9a5-433d-ae81-17d2a22b8b45-kube-api-access-f9d75" (OuterVolumeSpecName: "kube-api-access-f9d75") pod "ea489c47-d9a5-433d-ae81-17d2a22b8b45" (UID: "ea489c47-d9a5-433d-ae81-17d2a22b8b45"). InnerVolumeSpecName "kube-api-access-f9d75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.265373 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9d75\" (UniqueName: \"kubernetes.io/projected/ea489c47-d9a5-433d-ae81-17d2a22b8b45-kube-api-access-f9d75\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.544512 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tp4f4" event={"ID":"4ae672f1-e9e8-4adc-8b6d-a0005d030621","Type":"ContainerStarted","Data":"1e6b06981e43df8891e9468cd0c2e9fdc89e324e720bb0ac72b2f9b4b2743458"} Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.544563 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tp4f4" event={"ID":"4ae672f1-e9e8-4adc-8b6d-a0005d030621","Type":"ContainerStarted","Data":"b4a828e57f127c035bfd94fe05faf6b62df5e4d7ff9245e26018ac3d90273660"} Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.545769 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tp4f4" Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.559337 5029 generic.go:334] "Generic (PLEG): container finished" podID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerID="3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910" exitCode=0 Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.559440 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qdzq" event={"ID":"a43ada03-a22a-4bc0-bb38-242a917d3562","Type":"ContainerDied","Data":"3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910"} Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.568825 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557244-t2n87" Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.569953 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557244-t2n87" event={"ID":"ea489c47-d9a5-433d-ae81-17d2a22b8b45","Type":"ContainerDied","Data":"e700aac00425067614b354008125130601359812c4241623cda1af314bb95a94"} Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.570036 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e700aac00425067614b354008125130601359812c4241623cda1af314bb95a94" Mar 13 20:44:05 crc kubenswrapper[5029]: I0313 20:44:05.580617 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tp4f4" podStartSLOduration=3.580593487 podStartE2EDuration="3.580593487s" podCreationTimestamp="2026-03-13 20:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:44:05.577582845 +0000 UTC m=+1005.593665258" watchObservedRunningTime="2026-03-13 20:44:05.580593487 +0000 UTC m=+1005.596675890" Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.122764 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557238-f9gcs"] Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.128242 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557238-f9gcs"] Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.579917 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qdzq" event={"ID":"a43ada03-a22a-4bc0-bb38-242a917d3562","Type":"ContainerStarted","Data":"885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677"} Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.608656 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-5qdzq" podStartSLOduration=2.118664505 podStartE2EDuration="4.608638873s" podCreationTimestamp="2026-03-13 20:44:02 +0000 UTC" firstStartedPulling="2026-03-13 20:44:03.489212208 +0000 UTC m=+1003.505294611" lastFinishedPulling="2026-03-13 20:44:05.979186576 +0000 UTC m=+1005.995268979" observedRunningTime="2026-03-13 20:44:06.605754825 +0000 UTC m=+1006.621837238" watchObservedRunningTime="2026-03-13 20:44:06.608638873 +0000 UTC m=+1006.624721276" Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.611194 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bdfb127-5a55-4091-88f0-0d36e140afab" path="/var/lib/kubelet/pods/4bdfb127-5a55-4091-88f0-0d36e140afab/volumes" Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.971249 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgtvh"] Mar 13 20:44:06 crc kubenswrapper[5029]: E0313 20:44:06.971688 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea489c47-d9a5-433d-ae81-17d2a22b8b45" containerName="oc" Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.971706 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea489c47-d9a5-433d-ae81-17d2a22b8b45" containerName="oc" Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.971933 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea489c47-d9a5-433d-ae81-17d2a22b8b45" containerName="oc" Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.973146 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:06 crc kubenswrapper[5029]: I0313 20:44:06.983777 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgtvh"] Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.097845 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-catalog-content\") pod \"redhat-marketplace-zgtvh\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.098000 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84cd\" (UniqueName: \"kubernetes.io/projected/9d8dd452-b89e-4e07-a4d5-e31c42415df8-kube-api-access-c84cd\") pod \"redhat-marketplace-zgtvh\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.098044 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-utilities\") pod \"redhat-marketplace-zgtvh\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.199026 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-catalog-content\") pod \"redhat-marketplace-zgtvh\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.199125 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c84cd\" (UniqueName: \"kubernetes.io/projected/9d8dd452-b89e-4e07-a4d5-e31c42415df8-kube-api-access-c84cd\") pod \"redhat-marketplace-zgtvh\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.199171 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-utilities\") pod \"redhat-marketplace-zgtvh\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.199807 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-catalog-content\") pod \"redhat-marketplace-zgtvh\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.199934 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-utilities\") pod \"redhat-marketplace-zgtvh\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.239959 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84cd\" (UniqueName: \"kubernetes.io/projected/9d8dd452-b89e-4e07-a4d5-e31c42415df8-kube-api-access-c84cd\") pod \"redhat-marketplace-zgtvh\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.355526 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:07 crc kubenswrapper[5029]: I0313 20:44:07.751611 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgtvh"] Mar 13 20:44:07 crc kubenswrapper[5029]: W0313 20:44:07.786353 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d8dd452_b89e_4e07_a4d5_e31c42415df8.slice/crio-18e09fd7111432e5b897cd7cd71ddf25aeaea164d196351a6949e4caea9c0a93 WatchSource:0}: Error finding container 18e09fd7111432e5b897cd7cd71ddf25aeaea164d196351a6949e4caea9c0a93: Status 404 returned error can't find the container with id 18e09fd7111432e5b897cd7cd71ddf25aeaea164d196351a6949e4caea9c0a93 Mar 13 20:44:08 crc kubenswrapper[5029]: I0313 20:44:08.617569 5029 generic.go:334] "Generic (PLEG): container finished" podID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerID="dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb" exitCode=0 Mar 13 20:44:08 crc kubenswrapper[5029]: I0313 20:44:08.617687 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgtvh" event={"ID":"9d8dd452-b89e-4e07-a4d5-e31c42415df8","Type":"ContainerDied","Data":"dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb"} Mar 13 20:44:08 crc kubenswrapper[5029]: I0313 20:44:08.617959 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgtvh" event={"ID":"9d8dd452-b89e-4e07-a4d5-e31c42415df8","Type":"ContainerStarted","Data":"18e09fd7111432e5b897cd7cd71ddf25aeaea164d196351a6949e4caea9c0a93"} Mar 13 20:44:12 crc kubenswrapper[5029]: I0313 20:44:12.713480 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:12 crc kubenswrapper[5029]: I0313 20:44:12.713946 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:12 crc kubenswrapper[5029]: I0313 20:44:12.759503 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:13 crc kubenswrapper[5029]: I0313 20:44:13.155292 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-tlxnq" Mar 13 20:44:13 crc kubenswrapper[5029]: I0313 20:44:13.694502 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:13 crc kubenswrapper[5029]: I0313 20:44:13.733802 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qdzq"] Mar 13 20:44:14 crc kubenswrapper[5029]: I0313 20:44:14.046771 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tp4f4" Mar 13 20:44:14 crc kubenswrapper[5029]: I0313 20:44:14.665004 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" event={"ID":"f43167b4-ff02-4f87-98af-4f7e445e4620","Type":"ContainerStarted","Data":"a0e228a47bf9d0c7141396805a49a442b486363f327bbaef15afbbc083cc88b8"} Mar 13 20:44:14 crc kubenswrapper[5029]: I0313 20:44:14.665724 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:14 crc kubenswrapper[5029]: I0313 20:44:14.670426 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerStarted","Data":"49d9389ea2da9d23cebe67a38359f34182735792d315d0f7cfec138c84aa0db7"} Mar 13 20:44:14 crc kubenswrapper[5029]: I0313 20:44:14.689781 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" podStartSLOduration=0.980149089 podStartE2EDuration="12.689758499s" podCreationTimestamp="2026-03-13 20:44:02 +0000 UTC" firstStartedPulling="2026-03-13 20:44:02.762313736 +0000 UTC m=+1002.778396139" lastFinishedPulling="2026-03-13 20:44:14.471923146 +0000 UTC m=+1014.488005549" observedRunningTime="2026-03-13 20:44:14.679581643 +0000 UTC m=+1014.695664046" watchObservedRunningTime="2026-03-13 20:44:14.689758499 +0000 UTC m=+1014.705840902" Mar 13 20:44:15 crc kubenswrapper[5029]: I0313 20:44:15.678656 5029 generic.go:334] "Generic (PLEG): container finished" podID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerID="d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90" exitCode=0 Mar 13 20:44:15 crc kubenswrapper[5029]: I0313 20:44:15.678717 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgtvh" event={"ID":"9d8dd452-b89e-4e07-a4d5-e31c42415df8","Type":"ContainerDied","Data":"d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90"} Mar 13 20:44:15 crc kubenswrapper[5029]: I0313 20:44:15.681767 5029 generic.go:334] "Generic (PLEG): container finished" podID="62643dbe-126d-43e2-a08e-483ca7864ea6" containerID="49d9389ea2da9d23cebe67a38359f34182735792d315d0f7cfec138c84aa0db7" exitCode=0 Mar 13 20:44:15 crc kubenswrapper[5029]: I0313 20:44:15.681790 5029 generic.go:334] "Generic (PLEG): container finished" podID="62643dbe-126d-43e2-a08e-483ca7864ea6" containerID="24531c5c0a4391db949c93494ee4132231548578b62bd7bfcf838ce5af98e633" exitCode=0 Mar 13 20:44:15 crc kubenswrapper[5029]: I0313 20:44:15.681967 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5qdzq" podUID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerName="registry-server" containerID="cri-o://885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677" gracePeriod=2 Mar 13 20:44:15 crc 
kubenswrapper[5029]: I0313 20:44:15.682548 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerDied","Data":"49d9389ea2da9d23cebe67a38359f34182735792d315d0f7cfec138c84aa0db7"} Mar 13 20:44:15 crc kubenswrapper[5029]: I0313 20:44:15.682588 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerDied","Data":"24531c5c0a4391db949c93494ee4132231548578b62bd7bfcf838ce5af98e633"} Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.056638 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.163547 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-catalog-content\") pod \"a43ada03-a22a-4bc0-bb38-242a917d3562\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.163634 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-utilities\") pod \"a43ada03-a22a-4bc0-bb38-242a917d3562\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.163704 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5ptq\" (UniqueName: \"kubernetes.io/projected/a43ada03-a22a-4bc0-bb38-242a917d3562-kube-api-access-q5ptq\") pod \"a43ada03-a22a-4bc0-bb38-242a917d3562\" (UID: \"a43ada03-a22a-4bc0-bb38-242a917d3562\") " Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.166626 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-utilities" (OuterVolumeSpecName: "utilities") pod "a43ada03-a22a-4bc0-bb38-242a917d3562" (UID: "a43ada03-a22a-4bc0-bb38-242a917d3562"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.178822 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43ada03-a22a-4bc0-bb38-242a917d3562-kube-api-access-q5ptq" (OuterVolumeSpecName: "kube-api-access-q5ptq") pod "a43ada03-a22a-4bc0-bb38-242a917d3562" (UID: "a43ada03-a22a-4bc0-bb38-242a917d3562"). InnerVolumeSpecName "kube-api-access-q5ptq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.252504 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a43ada03-a22a-4bc0-bb38-242a917d3562" (UID: "a43ada03-a22a-4bc0-bb38-242a917d3562"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.265242 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.265275 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43ada03-a22a-4bc0-bb38-242a917d3562-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.265286 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5ptq\" (UniqueName: \"kubernetes.io/projected/a43ada03-a22a-4bc0-bb38-242a917d3562-kube-api-access-q5ptq\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.691613 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgtvh" event={"ID":"9d8dd452-b89e-4e07-a4d5-e31c42415df8","Type":"ContainerStarted","Data":"0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4"} Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.693972 5029 generic.go:334] "Generic (PLEG): container finished" podID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerID="885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677" exitCode=0 Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.694048 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qdzq" event={"ID":"a43ada03-a22a-4bc0-bb38-242a917d3562","Type":"ContainerDied","Data":"885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677"} Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.694081 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qdzq" 
event={"ID":"a43ada03-a22a-4bc0-bb38-242a917d3562","Type":"ContainerDied","Data":"c7aa28d9ef8b91d0cc77fa1769692b0121967ef94b9290158d785a479e130336"} Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.694100 5029 scope.go:117] "RemoveContainer" containerID="885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.694232 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qdzq" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.698884 5029 generic.go:334] "Generic (PLEG): container finished" podID="62643dbe-126d-43e2-a08e-483ca7864ea6" containerID="61a29297de9e30d3bb77feffd1dbc6f85b5b91d4a42f16e18bbf715aff3903c9" exitCode=0 Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.698928 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerDied","Data":"61a29297de9e30d3bb77feffd1dbc6f85b5b91d4a42f16e18bbf715aff3903c9"} Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.710312 5029 scope.go:117] "RemoveContainer" containerID="3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.746941 5029 scope.go:117] "RemoveContainer" containerID="e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.756988 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgtvh" podStartSLOduration=9.022657286 podStartE2EDuration="10.756966622s" podCreationTimestamp="2026-03-13 20:44:06 +0000 UTC" firstStartedPulling="2026-03-13 20:44:14.347660814 +0000 UTC m=+1014.363743217" lastFinishedPulling="2026-03-13 20:44:16.08197015 +0000 UTC m=+1016.098052553" observedRunningTime="2026-03-13 20:44:16.720954844 +0000 UTC m=+1016.737037267" 
watchObservedRunningTime="2026-03-13 20:44:16.756966622 +0000 UTC m=+1016.773049025" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.777460 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qdzq"] Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.783796 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5qdzq"] Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.792734 5029 scope.go:117] "RemoveContainer" containerID="885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677" Mar 13 20:44:16 crc kubenswrapper[5029]: E0313 20:44:16.793247 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677\": container with ID starting with 885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677 not found: ID does not exist" containerID="885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.793284 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677"} err="failed to get container status \"885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677\": rpc error: code = NotFound desc = could not find container \"885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677\": container with ID starting with 885cb5847275d7656b4381743e2055e8352baf708b102b14ff86700106514677 not found: ID does not exist" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.793310 5029 scope.go:117] "RemoveContainer" containerID="3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910" Mar 13 20:44:16 crc kubenswrapper[5029]: E0313 20:44:16.793616 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910\": container with ID starting with 3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910 not found: ID does not exist" containerID="3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.793639 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910"} err="failed to get container status \"3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910\": rpc error: code = NotFound desc = could not find container \"3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910\": container with ID starting with 3e281ecea352bcc5ad756d9527708a152e757205c6ac3997dc5bbcfe3c124910 not found: ID does not exist" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.793656 5029 scope.go:117] "RemoveContainer" containerID="e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84" Mar 13 20:44:16 crc kubenswrapper[5029]: E0313 20:44:16.794224 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84\": container with ID starting with e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84 not found: ID does not exist" containerID="e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84" Mar 13 20:44:16 crc kubenswrapper[5029]: I0313 20:44:16.794324 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84"} err="failed to get container status \"e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84\": rpc error: code = NotFound desc = could not find container 
\"e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84\": container with ID starting with e24db012ac6c00dc1571a6d423339d743df0baedec846f232e9836d2e7075a84 not found: ID does not exist" Mar 13 20:44:17 crc kubenswrapper[5029]: I0313 20:44:17.356262 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:17 crc kubenswrapper[5029]: I0313 20:44:17.356625 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:17 crc kubenswrapper[5029]: I0313 20:44:17.714566 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerStarted","Data":"e2671907837e7a41ce971e86300b222cde08c6c6ca1e1130eba5f141a5b1b03b"} Mar 13 20:44:17 crc kubenswrapper[5029]: I0313 20:44:17.714611 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerStarted","Data":"c8affda2f1ec66fc090d1940d3981cd334902d1312c6901ad4aaaba99fd30a5e"} Mar 13 20:44:17 crc kubenswrapper[5029]: I0313 20:44:17.714623 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerStarted","Data":"aac1ed8d1eb10dae7e0bc20164e001c447d46f675a278860b6d6010515a857be"} Mar 13 20:44:17 crc kubenswrapper[5029]: I0313 20:44:17.714633 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerStarted","Data":"604e4cffdca299aad48cc40dc582483127259315283751485928f64257cdd4dc"} Mar 13 20:44:17 crc kubenswrapper[5029]: I0313 20:44:17.714643 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" 
event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerStarted","Data":"9d0e3f7247c6611ee8f8f28e2c253a336999b404e815f1fae93bba8a9ce5532a"} Mar 13 20:44:18 crc kubenswrapper[5029]: I0313 20:44:18.399878 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zgtvh" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerName="registry-server" probeResult="failure" output=< Mar 13 20:44:18 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s Mar 13 20:44:18 crc kubenswrapper[5029]: > Mar 13 20:44:18 crc kubenswrapper[5029]: I0313 20:44:18.608011 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43ada03-a22a-4bc0-bb38-242a917d3562" path="/var/lib/kubelet/pods/a43ada03-a22a-4bc0-bb38-242a917d3562/volumes" Mar 13 20:44:18 crc kubenswrapper[5029]: I0313 20:44:18.725981 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f26jf" event={"ID":"62643dbe-126d-43e2-a08e-483ca7864ea6","Type":"ContainerStarted","Data":"133a27dc883eace7d6c6ceecd67d18dd2908d58e9e8d5afb82f6889b9597ada1"} Mar 13 20:44:18 crc kubenswrapper[5029]: I0313 20:44:18.750353 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-f26jf" podStartSLOduration=4.879584175 podStartE2EDuration="16.750194936s" podCreationTimestamp="2026-03-13 20:44:02 +0000 UTC" firstStartedPulling="2026-03-13 20:44:02.623165669 +0000 UTC m=+1002.639248072" lastFinishedPulling="2026-03-13 20:44:14.49377643 +0000 UTC m=+1014.509858833" observedRunningTime="2026-03-13 20:44:18.745981203 +0000 UTC m=+1018.762063616" watchObservedRunningTime="2026-03-13 20:44:18.750194936 +0000 UTC m=+1018.766277339" Mar 13 20:44:19 crc kubenswrapper[5029]: I0313 20:44:19.733142 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:21 crc kubenswrapper[5029]: I0313 20:44:21.454512 5029 scope.go:117] 
"RemoveContainer" containerID="de5fc7d829ab5dc5803488a115f8948eaeab4f58a3f887ee9246b55998bc7687" Mar 13 20:44:22 crc kubenswrapper[5029]: I0313 20:44:22.435986 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:22 crc kubenswrapper[5029]: I0313 20:44:22.475078 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:22 crc kubenswrapper[5029]: I0313 20:44:22.998754 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gd4fj"] Mar 13 20:44:22 crc kubenswrapper[5029]: E0313 20:44:22.999098 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerName="extract-content" Mar 13 20:44:22 crc kubenswrapper[5029]: I0313 20:44:22.999117 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerName="extract-content" Mar 13 20:44:22 crc kubenswrapper[5029]: E0313 20:44:22.999134 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerName="registry-server" Mar 13 20:44:22 crc kubenswrapper[5029]: I0313 20:44:22.999140 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerName="registry-server" Mar 13 20:44:22 crc kubenswrapper[5029]: E0313 20:44:22.999150 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerName="extract-utilities" Mar 13 20:44:22 crc kubenswrapper[5029]: I0313 20:44:22.999156 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43ada03-a22a-4bc0-bb38-242a917d3562" containerName="extract-utilities" Mar 13 20:44:22 crc kubenswrapper[5029]: I0313 20:44:22.999270 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43ada03-a22a-4bc0-bb38-242a917d3562" 
containerName="registry-server" Mar 13 20:44:22 crc kubenswrapper[5029]: I0313 20:44:22.999758 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gd4fj" Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.002737 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.002964 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.002971 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8b7rr" Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.011486 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gd4fj"] Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.154993 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8w7\" (UniqueName: \"kubernetes.io/projected/d94530ec-fdc6-4023-bca6-b8b62ed8f029-kube-api-access-pw8w7\") pod \"openstack-operator-index-gd4fj\" (UID: \"d94530ec-fdc6-4023-bca6-b8b62ed8f029\") " pod="openstack-operators/openstack-operator-index-gd4fj" Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.256441 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw8w7\" (UniqueName: \"kubernetes.io/projected/d94530ec-fdc6-4023-bca6-b8b62ed8f029-kube-api-access-pw8w7\") pod \"openstack-operator-index-gd4fj\" (UID: \"d94530ec-fdc6-4023-bca6-b8b62ed8f029\") " pod="openstack-operators/openstack-operator-index-gd4fj" Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.276885 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw8w7\" (UniqueName: 
\"kubernetes.io/projected/d94530ec-fdc6-4023-bca6-b8b62ed8f029-kube-api-access-pw8w7\") pod \"openstack-operator-index-gd4fj\" (UID: \"d94530ec-fdc6-4023-bca6-b8b62ed8f029\") " pod="openstack-operators/openstack-operator-index-gd4fj" Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.318000 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gd4fj" Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.728176 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gd4fj"] Mar 13 20:44:23 crc kubenswrapper[5029]: I0313 20:44:23.757400 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gd4fj" event={"ID":"d94530ec-fdc6-4023-bca6-b8b62ed8f029","Type":"ContainerStarted","Data":"b93c27de069d091c165d523edec0297be12e087f98e771d51e3cb0820e146e5a"} Mar 13 20:44:26 crc kubenswrapper[5029]: I0313 20:44:26.776382 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gd4fj" event={"ID":"d94530ec-fdc6-4023-bca6-b8b62ed8f029","Type":"ContainerStarted","Data":"212aac4d41789942415579a18a34df96fce32fabb2dfa1e4bde173711c486fb1"} Mar 13 20:44:26 crc kubenswrapper[5029]: I0313 20:44:26.796784 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gd4fj" podStartSLOduration=2.705083928 podStartE2EDuration="4.796768266s" podCreationTimestamp="2026-03-13 20:44:22 +0000 UTC" firstStartedPulling="2026-03-13 20:44:23.73847786 +0000 UTC m=+1023.754560263" lastFinishedPulling="2026-03-13 20:44:25.830162198 +0000 UTC m=+1025.846244601" observedRunningTime="2026-03-13 20:44:26.790725191 +0000 UTC m=+1026.806807594" watchObservedRunningTime="2026-03-13 20:44:26.796768266 +0000 UTC m=+1026.812850669" Mar 13 20:44:27 crc kubenswrapper[5029]: I0313 20:44:27.400796 5029 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:27 crc kubenswrapper[5029]: I0313 20:44:27.445490 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.190130 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgtvh"] Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.190778 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgtvh" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerName="registry-server" containerID="cri-o://0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4" gracePeriod=2 Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.553042 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.659498 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-catalog-content\") pod \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.659687 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c84cd\" (UniqueName: \"kubernetes.io/projected/9d8dd452-b89e-4e07-a4d5-e31c42415df8-kube-api-access-c84cd\") pod \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.659742 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-utilities\") 
pod \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\" (UID: \"9d8dd452-b89e-4e07-a4d5-e31c42415df8\") " Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.660991 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-utilities" (OuterVolumeSpecName: "utilities") pod "9d8dd452-b89e-4e07-a4d5-e31c42415df8" (UID: "9d8dd452-b89e-4e07-a4d5-e31c42415df8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.670351 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8dd452-b89e-4e07-a4d5-e31c42415df8-kube-api-access-c84cd" (OuterVolumeSpecName: "kube-api-access-c84cd") pod "9d8dd452-b89e-4e07-a4d5-e31c42415df8" (UID: "9d8dd452-b89e-4e07-a4d5-e31c42415df8"). InnerVolumeSpecName "kube-api-access-c84cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.690754 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d8dd452-b89e-4e07-a4d5-e31c42415df8" (UID: "9d8dd452-b89e-4e07-a4d5-e31c42415df8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.761553 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.761586 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c84cd\" (UniqueName: \"kubernetes.io/projected/9d8dd452-b89e-4e07-a4d5-e31c42415df8-kube-api-access-c84cd\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.761597 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8dd452-b89e-4e07-a4d5-e31c42415df8-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.799379 5029 generic.go:334] "Generic (PLEG): container finished" podID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerID="0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4" exitCode=0 Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.799422 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgtvh" event={"ID":"9d8dd452-b89e-4e07-a4d5-e31c42415df8","Type":"ContainerDied","Data":"0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4"} Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.799452 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgtvh" event={"ID":"9d8dd452-b89e-4e07-a4d5-e31c42415df8","Type":"ContainerDied","Data":"18e09fd7111432e5b897cd7cd71ddf25aeaea164d196351a6949e4caea9c0a93"} Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.799468 5029 scope.go:117] "RemoveContainer" containerID="0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 
20:44:30.799478 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgtvh" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.816969 5029 scope.go:117] "RemoveContainer" containerID="d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.825625 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgtvh"] Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.833459 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgtvh"] Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.853683 5029 scope.go:117] "RemoveContainer" containerID="dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.869146 5029 scope.go:117] "RemoveContainer" containerID="0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4" Mar 13 20:44:30 crc kubenswrapper[5029]: E0313 20:44:30.869702 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4\": container with ID starting with 0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4 not found: ID does not exist" containerID="0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.869748 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4"} err="failed to get container status \"0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4\": rpc error: code = NotFound desc = could not find container \"0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4\": container with ID starting with 
0e2f9dc5d96a54558c6238f2b9adb28cee9aad2e7dc7ad16c6a951559bc0d0f4 not found: ID does not exist" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.869774 5029 scope.go:117] "RemoveContainer" containerID="d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90" Mar 13 20:44:30 crc kubenswrapper[5029]: E0313 20:44:30.870195 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90\": container with ID starting with d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90 not found: ID does not exist" containerID="d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.870228 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90"} err="failed to get container status \"d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90\": rpc error: code = NotFound desc = could not find container \"d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90\": container with ID starting with d00d3ead660eeec6f85c5fc646a57d418c32297a07aba688ca0873e283d95c90 not found: ID does not exist" Mar 13 20:44:30 crc kubenswrapper[5029]: I0313 20:44:30.870255 5029 scope.go:117] "RemoveContainer" containerID="dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb" Mar 13 20:44:30 crc kubenswrapper[5029]: E0313 20:44:30.870656 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb\": container with ID starting with dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb not found: ID does not exist" containerID="dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb" Mar 13 20:44:30 crc 
kubenswrapper[5029]: I0313 20:44:30.870694 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb"} err="failed to get container status \"dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb\": rpc error: code = NotFound desc = could not find container \"dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb\": container with ID starting with dc84943856a3bd5c5bbf6d968a76d98b9ff7f263f1eb60ffdc4834fefb5f3feb not found: ID does not exist" Mar 13 20:44:32 crc kubenswrapper[5029]: I0313 20:44:32.438965 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-f26jf" Mar 13 20:44:32 crc kubenswrapper[5029]: I0313 20:44:32.457540 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mrnn8" Mar 13 20:44:32 crc kubenswrapper[5029]: I0313 20:44:32.607046 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" path="/var/lib/kubelet/pods/9d8dd452-b89e-4e07-a4d5-e31c42415df8/volumes" Mar 13 20:44:33 crc kubenswrapper[5029]: I0313 20:44:33.318601 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gd4fj" Mar 13 20:44:33 crc kubenswrapper[5029]: I0313 20:44:33.318658 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gd4fj" Mar 13 20:44:33 crc kubenswrapper[5029]: I0313 20:44:33.344695 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gd4fj" Mar 13 20:44:33 crc kubenswrapper[5029]: I0313 20:44:33.845646 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gd4fj" Mar 13 20:44:35 crc 
kubenswrapper[5029]: I0313 20:44:35.034713 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4"] Mar 13 20:44:35 crc kubenswrapper[5029]: E0313 20:44:35.034998 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerName="registry-server" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.035011 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerName="registry-server" Mar 13 20:44:35 crc kubenswrapper[5029]: E0313 20:44:35.035025 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerName="extract-utilities" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.035031 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerName="extract-utilities" Mar 13 20:44:35 crc kubenswrapper[5029]: E0313 20:44:35.035930 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerName="extract-content" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.035971 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerName="extract-content" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.036280 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8dd452-b89e-4e07-a4d5-e31c42415df8" containerName="registry-server" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.037494 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.040663 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-k527b" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.046068 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4"] Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.125386 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-bundle\") pod \"a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.125457 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql5zd\" (UniqueName: \"kubernetes.io/projected/00c86530-24c9-45c2-857a-44a29dba7ec3-kube-api-access-ql5zd\") pod \"a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.125506 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-util\") pod \"a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 
20:44:35.226924 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-bundle\") pod \"a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.226988 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql5zd\" (UniqueName: \"kubernetes.io/projected/00c86530-24c9-45c2-857a-44a29dba7ec3-kube-api-access-ql5zd\") pod \"a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.227019 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-util\") pod \"a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.227437 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-util\") pod \"a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.227489 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-bundle\") pod \"a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.246063 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql5zd\" (UniqueName: \"kubernetes.io/projected/00c86530-24c9-45c2-857a-44a29dba7ec3-kube-api-access-ql5zd\") pod \"a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.360610 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.658977 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4"] Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.856730 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" event={"ID":"00c86530-24c9-45c2-857a-44a29dba7ec3","Type":"ContainerStarted","Data":"f7590e4602e78427df1f591164fc70cbe166bfa49bc015e6ae5ce449344af78b"} Mar 13 20:44:35 crc kubenswrapper[5029]: I0313 20:44:35.857158 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" event={"ID":"00c86530-24c9-45c2-857a-44a29dba7ec3","Type":"ContainerStarted","Data":"d0ee7c45c7fd7a952ec754abdb5538a19dcc6dddc8f618a6b702c884c544efc0"} Mar 13 20:44:36 crc kubenswrapper[5029]: I0313 20:44:36.862613 5029 
generic.go:334] "Generic (PLEG): container finished" podID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerID="f7590e4602e78427df1f591164fc70cbe166bfa49bc015e6ae5ce449344af78b" exitCode=0 Mar 13 20:44:36 crc kubenswrapper[5029]: I0313 20:44:36.862661 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" event={"ID":"00c86530-24c9-45c2-857a-44a29dba7ec3","Type":"ContainerDied","Data":"f7590e4602e78427df1f591164fc70cbe166bfa49bc015e6ae5ce449344af78b"} Mar 13 20:44:37 crc kubenswrapper[5029]: I0313 20:44:37.872054 5029 generic.go:334] "Generic (PLEG): container finished" podID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerID="6638fac395a9cc7e3bda68efba088b8c9a689623114874fc1686e0e22b7df72d" exitCode=0 Mar 13 20:44:37 crc kubenswrapper[5029]: I0313 20:44:37.872147 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" event={"ID":"00c86530-24c9-45c2-857a-44a29dba7ec3","Type":"ContainerDied","Data":"6638fac395a9cc7e3bda68efba088b8c9a689623114874fc1686e0e22b7df72d"} Mar 13 20:44:38 crc kubenswrapper[5029]: I0313 20:44:38.882370 5029 generic.go:334] "Generic (PLEG): container finished" podID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerID="b163ccbe3cbb0ea95336770c35604f282475da0b472fbaafa2d3d5a55da45e99" exitCode=0 Mar 13 20:44:38 crc kubenswrapper[5029]: I0313 20:44:38.882484 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" event={"ID":"00c86530-24c9-45c2-857a-44a29dba7ec3","Type":"ContainerDied","Data":"b163ccbe3cbb0ea95336770c35604f282475da0b472fbaafa2d3d5a55da45e99"} Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.163436 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.356806 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-util\") pod \"00c86530-24c9-45c2-857a-44a29dba7ec3\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.356996 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql5zd\" (UniqueName: \"kubernetes.io/projected/00c86530-24c9-45c2-857a-44a29dba7ec3-kube-api-access-ql5zd\") pod \"00c86530-24c9-45c2-857a-44a29dba7ec3\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.357095 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-bundle\") pod \"00c86530-24c9-45c2-857a-44a29dba7ec3\" (UID: \"00c86530-24c9-45c2-857a-44a29dba7ec3\") " Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.358209 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-bundle" (OuterVolumeSpecName: "bundle") pod "00c86530-24c9-45c2-857a-44a29dba7ec3" (UID: "00c86530-24c9-45c2-857a-44a29dba7ec3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.367976 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c86530-24c9-45c2-857a-44a29dba7ec3-kube-api-access-ql5zd" (OuterVolumeSpecName: "kube-api-access-ql5zd") pod "00c86530-24c9-45c2-857a-44a29dba7ec3" (UID: "00c86530-24c9-45c2-857a-44a29dba7ec3"). InnerVolumeSpecName "kube-api-access-ql5zd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.372237 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-util" (OuterVolumeSpecName: "util") pod "00c86530-24c9-45c2-857a-44a29dba7ec3" (UID: "00c86530-24c9-45c2-857a-44a29dba7ec3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.458959 5029 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-util\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.459002 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql5zd\" (UniqueName: \"kubernetes.io/projected/00c86530-24c9-45c2-857a-44a29dba7ec3-kube-api-access-ql5zd\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.459015 5029 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00c86530-24c9-45c2-857a-44a29dba7ec3-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.900962 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" event={"ID":"00c86530-24c9-45c2-857a-44a29dba7ec3","Type":"ContainerDied","Data":"d0ee7c45c7fd7a952ec754abdb5538a19dcc6dddc8f618a6b702c884c544efc0"} Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.901030 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4" Mar 13 20:44:40 crc kubenswrapper[5029]: I0313 20:44:40.901044 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ee7c45c7fd7a952ec754abdb5538a19dcc6dddc8f618a6b702c884c544efc0" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.364294 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h"] Mar 13 20:44:44 crc kubenswrapper[5029]: E0313 20:44:44.365186 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerName="pull" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.365201 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerName="pull" Mar 13 20:44:44 crc kubenswrapper[5029]: E0313 20:44:44.365213 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerName="extract" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.365222 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerName="extract" Mar 13 20:44:44 crc kubenswrapper[5029]: E0313 20:44:44.365243 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerName="util" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.365251 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerName="util" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.365391 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c86530-24c9-45c2-857a-44a29dba7ec3" containerName="extract" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.365890 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.405447 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sk9z6" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.413067 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfrrd\" (UniqueName: \"kubernetes.io/projected/f889ccf3-c017-4e72-8f23-d5355cbade76-kube-api-access-bfrrd\") pod \"openstack-operator-controller-init-5c46d6fb64-hjq8h\" (UID: \"f889ccf3-c017-4e72-8f23-d5355cbade76\") " pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.425784 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h"] Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.514098 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfrrd\" (UniqueName: \"kubernetes.io/projected/f889ccf3-c017-4e72-8f23-d5355cbade76-kube-api-access-bfrrd\") pod \"openstack-operator-controller-init-5c46d6fb64-hjq8h\" (UID: \"f889ccf3-c017-4e72-8f23-d5355cbade76\") " pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.535286 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfrrd\" (UniqueName: \"kubernetes.io/projected/f889ccf3-c017-4e72-8f23-d5355cbade76-kube-api-access-bfrrd\") pod \"openstack-operator-controller-init-5c46d6fb64-hjq8h\" (UID: \"f889ccf3-c017-4e72-8f23-d5355cbade76\") " pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" Mar 13 20:44:44 crc kubenswrapper[5029]: I0313 20:44:44.713478 5029 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" Mar 13 20:44:45 crc kubenswrapper[5029]: I0313 20:44:45.180864 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h"] Mar 13 20:44:45 crc kubenswrapper[5029]: W0313 20:44:45.191601 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf889ccf3_c017_4e72_8f23_d5355cbade76.slice/crio-12804663e129cc6a8f52c5bba153baea433e1a989f23f39ef6307771a64ffa44 WatchSource:0}: Error finding container 12804663e129cc6a8f52c5bba153baea433e1a989f23f39ef6307771a64ffa44: Status 404 returned error can't find the container with id 12804663e129cc6a8f52c5bba153baea433e1a989f23f39ef6307771a64ffa44 Mar 13 20:44:45 crc kubenswrapper[5029]: I0313 20:44:45.944567 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" event={"ID":"f889ccf3-c017-4e72-8f23-d5355cbade76","Type":"ContainerStarted","Data":"12804663e129cc6a8f52c5bba153baea433e1a989f23f39ef6307771a64ffa44"} Mar 13 20:44:49 crc kubenswrapper[5029]: I0313 20:44:49.975923 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" event={"ID":"f889ccf3-c017-4e72-8f23-d5355cbade76","Type":"ContainerStarted","Data":"49136767abb22a65c39120b15642b6b71dd1dc03d0adc18a021fbf36add56db6"} Mar 13 20:44:49 crc kubenswrapper[5029]: I0313 20:44:49.976447 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" Mar 13 20:44:50 crc kubenswrapper[5029]: I0313 20:44:50.011371 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" podStartSLOduration=2.318790702 
podStartE2EDuration="6.011342433s" podCreationTimestamp="2026-03-13 20:44:44 +0000 UTC" firstStartedPulling="2026-03-13 20:44:45.194580379 +0000 UTC m=+1045.210662782" lastFinishedPulling="2026-03-13 20:44:48.88713211 +0000 UTC m=+1048.903214513" observedRunningTime="2026-03-13 20:44:50.001778844 +0000 UTC m=+1050.017861287" watchObservedRunningTime="2026-03-13 20:44:50.011342433 +0000 UTC m=+1050.027424856" Mar 13 20:44:54 crc kubenswrapper[5029]: I0313 20:44:54.717677 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5c46d6fb64-hjq8h" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.144431 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9"] Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.146612 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.149247 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.150675 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.151752 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9"] Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.248603 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-config-volume\") pod \"collect-profiles-29557245-tqct9\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.248984 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f94rh\" (UniqueName: \"kubernetes.io/projected/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-kube-api-access-f94rh\") pod \"collect-profiles-29557245-tqct9\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.249018 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-secret-volume\") pod \"collect-profiles-29557245-tqct9\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.350638 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-config-volume\") pod \"collect-profiles-29557245-tqct9\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.350685 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f94rh\" (UniqueName: \"kubernetes.io/projected/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-kube-api-access-f94rh\") pod \"collect-profiles-29557245-tqct9\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.350707 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-secret-volume\") pod \"collect-profiles-29557245-tqct9\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.351650 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-config-volume\") pod \"collect-profiles-29557245-tqct9\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.356891 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-secret-volume\") pod \"collect-profiles-29557245-tqct9\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.372538 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f94rh\" (UniqueName: \"kubernetes.io/projected/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-kube-api-access-f94rh\") pod \"collect-profiles-29557245-tqct9\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.466121 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:00 crc kubenswrapper[5029]: I0313 20:45:00.924395 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9"] Mar 13 20:45:01 crc kubenswrapper[5029]: I0313 20:45:01.042283 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" event={"ID":"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098","Type":"ContainerStarted","Data":"81ba8b4e407c01525891e96e6c802debb72d6270aa3782a5f6a490a985d2b9cb"} Mar 13 20:45:02 crc kubenswrapper[5029]: I0313 20:45:02.073490 5029 generic.go:334] "Generic (PLEG): container finished" podID="b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098" containerID="e6885b9648e6492fb5950a6562c7f58f4243dd3e8fa4620eb3a6aa097f6375b2" exitCode=0 Mar 13 20:45:02 crc kubenswrapper[5029]: I0313 20:45:02.073788 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" event={"ID":"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098","Type":"ContainerDied","Data":"e6885b9648e6492fb5950a6562c7f58f4243dd3e8fa4620eb3a6aa097f6375b2"} Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.344615 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.498069 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-secret-volume\") pod \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.498420 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-config-volume\") pod \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.498516 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f94rh\" (UniqueName: \"kubernetes.io/projected/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-kube-api-access-f94rh\") pod \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\" (UID: \"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098\") " Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.499392 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-config-volume" (OuterVolumeSpecName: "config-volume") pod "b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098" (UID: "b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.506391 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098" (UID: "b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.506726 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-kube-api-access-f94rh" (OuterVolumeSpecName: "kube-api-access-f94rh") pod "b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098" (UID: "b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098"). InnerVolumeSpecName "kube-api-access-f94rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.599936 5029 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.599986 5029 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 20:45:03 crc kubenswrapper[5029]: I0313 20:45:03.600003 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f94rh\" (UniqueName: \"kubernetes.io/projected/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098-kube-api-access-f94rh\") on node \"crc\" DevicePath \"\"" Mar 13 20:45:04 crc kubenswrapper[5029]: I0313 20:45:04.088013 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" event={"ID":"b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098","Type":"ContainerDied","Data":"81ba8b4e407c01525891e96e6c802debb72d6270aa3782a5f6a490a985d2b9cb"} Mar 13 20:45:04 crc kubenswrapper[5029]: I0313 20:45:04.088061 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ba8b4e407c01525891e96e6c802debb72d6270aa3782a5f6a490a985d2b9cb" Mar 13 20:45:04 crc kubenswrapper[5029]: I0313 20:45:04.088059 5029 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.719227 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn"] Mar 13 20:45:13 crc kubenswrapper[5029]: E0313 20:45:13.720318 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098" containerName="collect-profiles" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.720335 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098" containerName="collect-profiles" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.720584 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098" containerName="collect-profiles" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.721230 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.723378 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nvzqv" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.728221 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.729256 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.730997 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-z5x7w" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.739689 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zbs\" (UniqueName: \"kubernetes.io/projected/5af430c9-929c-4f4b-8a2e-0b346433c966-kube-api-access-p2zbs\") pod \"barbican-operator-controller-manager-d47688694-cmqrn\" (UID: \"5af430c9-929c-4f4b-8a2e-0b346433c966\") " pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.739771 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cdcj\" (UniqueName: \"kubernetes.io/projected/62985a1a-96c3-413d-b4ba-1e30082b4252-kube-api-access-5cdcj\") pod \"cinder-operator-controller-manager-984cd4dcf-wss56\" (UID: \"62985a1a-96c3-413d-b4ba-1e30082b4252\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.746545 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.752655 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.774708 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.777523 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.783741 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kd8xm" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.791804 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.796398 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.798420 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.804408 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2nvpc" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.820505 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.826788 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.828000 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.831295 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-9jkh6" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.841072 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cdcj\" (UniqueName: \"kubernetes.io/projected/62985a1a-96c3-413d-b4ba-1e30082b4252-kube-api-access-5cdcj\") pod \"cinder-operator-controller-manager-984cd4dcf-wss56\" (UID: \"62985a1a-96c3-413d-b4ba-1e30082b4252\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.841412 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84spt\" (UniqueName: \"kubernetes.io/projected/8572f8c5-5098-41a3-8596-e93818c51912-kube-api-access-84spt\") pod \"heat-operator-controller-manager-77b6666d85-8nx6k\" (UID: \"8572f8c5-5098-41a3-8596-e93818c51912\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.848027 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4zdf\" (UniqueName: \"kubernetes.io/projected/cb6725e8-bfb1-4ae6-884c-d70e86c2e268-kube-api-access-m4zdf\") pod \"glance-operator-controller-manager-5964f64c48-jtfsz\" (UID: \"cb6725e8-bfb1-4ae6-884c-d70e86c2e268\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.848339 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2zbs\" (UniqueName: \"kubernetes.io/projected/5af430c9-929c-4f4b-8a2e-0b346433c966-kube-api-access-p2zbs\") pod 
\"barbican-operator-controller-manager-d47688694-cmqrn\" (UID: \"5af430c9-929c-4f4b-8a2e-0b346433c966\") " pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.848552 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nxdw\" (UniqueName: \"kubernetes.io/projected/9fae77a6-7657-435b-9eaa-46738bd3adff-kube-api-access-8nxdw\") pod \"designate-operator-controller-manager-66d56f6ff4-djjwn\" (UID: \"9fae77a6-7657-435b-9eaa-46738bd3adff\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.857930 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.858792 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.862657 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-twvlm" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.884239 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.887828 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2zbs\" (UniqueName: \"kubernetes.io/projected/5af430c9-929c-4f4b-8a2e-0b346433c966-kube-api-access-p2zbs\") pod \"barbican-operator-controller-manager-d47688694-cmqrn\" (UID: \"5af430c9-929c-4f4b-8a2e-0b346433c966\") " pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.898644 
5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cdcj\" (UniqueName: \"kubernetes.io/projected/62985a1a-96c3-413d-b4ba-1e30082b4252-kube-api-access-5cdcj\") pod \"cinder-operator-controller-manager-984cd4dcf-wss56\" (UID: \"62985a1a-96c3-413d-b4ba-1e30082b4252\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.900322 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.905977 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.906834 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.916299 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hxd6g" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.916579 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.932011 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.939979 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz"] Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.960016 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkm66\" (UniqueName: 
\"kubernetes.io/projected/c78e7c55-5a08-44a3-9ab9-8229d3b63c95-kube-api-access-gkm66\") pod \"horizon-operator-controller-manager-6d9d6b584d-pthcv\" (UID: \"c78e7c55-5a08-44a3-9ab9-8229d3b63c95\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.960104 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84spt\" (UniqueName: \"kubernetes.io/projected/8572f8c5-5098-41a3-8596-e93818c51912-kube-api-access-84spt\") pod \"heat-operator-controller-manager-77b6666d85-8nx6k\" (UID: \"8572f8c5-5098-41a3-8596-e93818c51912\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.960180 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4zdf\" (UniqueName: \"kubernetes.io/projected/cb6725e8-bfb1-4ae6-884c-d70e86c2e268-kube-api-access-m4zdf\") pod \"glance-operator-controller-manager-5964f64c48-jtfsz\" (UID: \"cb6725e8-bfb1-4ae6-884c-d70e86c2e268\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.960528 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nxdw\" (UniqueName: \"kubernetes.io/projected/9fae77a6-7657-435b-9eaa-46738bd3adff-kube-api-access-8nxdw\") pod \"designate-operator-controller-manager-66d56f6ff4-djjwn\" (UID: \"9fae77a6-7657-435b-9eaa-46738bd3adff\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" Mar 13 20:45:13 crc kubenswrapper[5029]: I0313 20:45:13.993023 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.001635 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nxdw\" (UniqueName: \"kubernetes.io/projected/9fae77a6-7657-435b-9eaa-46738bd3adff-kube-api-access-8nxdw\") pod \"designate-operator-controller-manager-66d56f6ff4-djjwn\" (UID: \"9fae77a6-7657-435b-9eaa-46738bd3adff\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.043106 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84spt\" (UniqueName: \"kubernetes.io/projected/8572f8c5-5098-41a3-8596-e93818c51912-kube-api-access-84spt\") pod \"heat-operator-controller-manager-77b6666d85-8nx6k\" (UID: \"8572f8c5-5098-41a3-8596-e93818c51912\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.046881 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.048942 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kscsh" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.053884 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.056174 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4zdf\" (UniqueName: \"kubernetes.io/projected/cb6725e8-bfb1-4ae6-884c-d70e86c2e268-kube-api-access-m4zdf\") pod \"glance-operator-controller-manager-5964f64c48-jtfsz\" (UID: \"cb6725e8-bfb1-4ae6-884c-d70e86c2e268\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.058123 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.070560 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkm66\" (UniqueName: \"kubernetes.io/projected/c78e7c55-5a08-44a3-9ab9-8229d3b63c95-kube-api-access-gkm66\") pod \"horizon-operator-controller-manager-6d9d6b584d-pthcv\" (UID: \"c78e7c55-5a08-44a3-9ab9-8229d3b63c95\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.070619 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtx5d\" (UniqueName: \"kubernetes.io/projected/03ada4f5-407f-4ce4-8cdd-b91ba50d6e24-kube-api-access-jtx5d\") pod \"ironic-operator-controller-manager-5bc894d9b-qvzqz\" (UID: \"03ada4f5-407f-4ce4-8cdd-b91ba50d6e24\") " pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.070666 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" 
(UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.070685 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pxg\" (UniqueName: \"kubernetes.io/projected/9885322a-6140-443a-9c3a-d21a4674c0f9-kube-api-access-h6pxg\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.080177 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.081134 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.091704 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-n5zkp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.093704 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.106973 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.110909 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkm66\" (UniqueName: \"kubernetes.io/projected/c78e7c55-5a08-44a3-9ab9-8229d3b63c95-kube-api-access-gkm66\") pod \"horizon-operator-controller-manager-6d9d6b584d-pthcv\" (UID: \"c78e7c55-5a08-44a3-9ab9-8229d3b63c95\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.118309 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.121529 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.122621 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.126648 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-x9gmk" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.128942 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.130112 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.132100 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-q9mn6" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.142980 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.149673 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.150137 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.151021 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.157157 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.163201 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-w4nx5" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.163411 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.172266 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqr7r\" (UniqueName: \"kubernetes.io/projected/e5ca1347-56a7-4fea-8256-0728bc438b76-kube-api-access-fqr7r\") pod \"keystone-operator-controller-manager-684f77d66d-gzknz\" (UID: \"e5ca1347-56a7-4fea-8256-0728bc438b76\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.172361 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nvwj\" (UniqueName: \"kubernetes.io/projected/0ea96653-f3ad-443c-85cb-27806cc8d02f-kube-api-access-4nvwj\") pod \"manila-operator-controller-manager-57b484b4df-dqb4l\" (UID: \"0ea96653-f3ad-443c-85cb-27806cc8d02f\") " pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.172401 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtx5d\" (UniqueName: \"kubernetes.io/projected/03ada4f5-407f-4ce4-8cdd-b91ba50d6e24-kube-api-access-jtx5d\") pod \"ironic-operator-controller-manager-5bc894d9b-qvzqz\" 
(UID: \"03ada4f5-407f-4ce4-8cdd-b91ba50d6e24\") " pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.172456 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95fn7\" (UniqueName: \"kubernetes.io/projected/465d67e8-1ca2-4c48-9ea6-5a46f41e4333-kube-api-access-95fn7\") pod \"mariadb-operator-controller-manager-5b6b6b4c9f-5stmj\" (UID: \"465d67e8-1ca2-4c48-9ea6-5a46f41e4333\") " pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.172483 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884q6\" (UniqueName: \"kubernetes.io/projected/0bbae089-e35f-4e2a-98f9-3348cb910e91-kube-api-access-884q6\") pod \"neutron-operator-controller-manager-776c5696bf-wkr5q\" (UID: \"0bbae089-e35f-4e2a-98f9-3348cb910e91\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.172517 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.172547 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6pxg\" (UniqueName: \"kubernetes.io/projected/9885322a-6140-443a-9c3a-d21a4674c0f9-kube-api-access-h6pxg\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:14 crc 
kubenswrapper[5029]: E0313 20:45:14.173221 5029 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: E0313 20:45:14.173284 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert podName:9885322a-6140-443a-9c3a-d21a4674c0f9 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:14.673262353 +0000 UTC m=+1074.689344756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert") pod "infra-operator-controller-manager-54dc5b8f8d-jwm4t" (UID: "9885322a-6140-443a-9c3a-d21a4674c0f9") : secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.180286 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.180815 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f84474648-strvq"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.181916 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.184659 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-nvmrj" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.186353 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f84474648-strvq"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.194106 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.195215 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.196021 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6pxg\" (UniqueName: \"kubernetes.io/projected/9885322a-6140-443a-9c3a-d21a4674c0f9-kube-api-access-h6pxg\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.197152 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-sv2zg" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.199230 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtx5d\" (UniqueName: \"kubernetes.io/projected/03ada4f5-407f-4ce4-8cdd-b91ba50d6e24-kube-api-access-jtx5d\") pod \"ironic-operator-controller-manager-5bc894d9b-qvzqz\" (UID: \"03ada4f5-407f-4ce4-8cdd-b91ba50d6e24\") " 
pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.200573 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.229355 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.231643 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.238762 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7df2s" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.264282 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.275192 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxt5\" (UniqueName: \"kubernetes.io/projected/54ccdb4e-12ea-481d-b139-21820e7cb430-kube-api-access-cwxt5\") pod \"ovn-operator-controller-manager-bbc5b68f9-r6d75\" (UID: \"54ccdb4e-12ea-481d-b139-21820e7cb430\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.275473 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqr7r\" (UniqueName: \"kubernetes.io/projected/e5ca1347-56a7-4fea-8256-0728bc438b76-kube-api-access-fqr7r\") pod \"keystone-operator-controller-manager-684f77d66d-gzknz\" (UID: \"e5ca1347-56a7-4fea-8256-0728bc438b76\") " 
pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.275530 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6h5\" (UniqueName: \"kubernetes.io/projected/246360b4-7120-4eb9-b734-cfd22fb35bc6-kube-api-access-wq6h5\") pod \"nova-operator-controller-manager-7f84474648-strvq\" (UID: \"246360b4-7120-4eb9-b734-cfd22fb35bc6\") " pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.275567 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nvwj\" (UniqueName: \"kubernetes.io/projected/0ea96653-f3ad-443c-85cb-27806cc8d02f-kube-api-access-4nvwj\") pod \"manila-operator-controller-manager-57b484b4df-dqb4l\" (UID: \"0ea96653-f3ad-443c-85cb-27806cc8d02f\") " pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.275598 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2v9\" (UniqueName: \"kubernetes.io/projected/60caa364-7d62-4d19-8de1-6b231b90adb7-kube-api-access-nb2v9\") pod \"octavia-operator-controller-manager-5f4f55cb5c-2zjps\" (UID: \"60caa364-7d62-4d19-8de1-6b231b90adb7\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.275646 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95fn7\" (UniqueName: \"kubernetes.io/projected/465d67e8-1ca2-4c48-9ea6-5a46f41e4333-kube-api-access-95fn7\") pod \"mariadb-operator-controller-manager-5b6b6b4c9f-5stmj\" (UID: \"465d67e8-1ca2-4c48-9ea6-5a46f41e4333\") " pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" Mar 13 20:45:14 crc kubenswrapper[5029]: 
I0313 20:45:14.275666 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-884q6\" (UniqueName: \"kubernetes.io/projected/0bbae089-e35f-4e2a-98f9-3348cb910e91-kube-api-access-884q6\") pod \"neutron-operator-controller-manager-776c5696bf-wkr5q\" (UID: \"0bbae089-e35f-4e2a-98f9-3348cb910e91\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.287381 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.288358 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.302561 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qp6p6" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.303158 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nvwj\" (UniqueName: \"kubernetes.io/projected/0ea96653-f3ad-443c-85cb-27806cc8d02f-kube-api-access-4nvwj\") pod \"manila-operator-controller-manager-57b484b4df-dqb4l\" (UID: \"0ea96653-f3ad-443c-85cb-27806cc8d02f\") " pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.312179 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqr7r\" (UniqueName: \"kubernetes.io/projected/e5ca1347-56a7-4fea-8256-0728bc438b76-kube-api-access-fqr7r\") pod \"keystone-operator-controller-manager-684f77d66d-gzknz\" (UID: \"e5ca1347-56a7-4fea-8256-0728bc438b76\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 
20:45:14.325742 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95fn7\" (UniqueName: \"kubernetes.io/projected/465d67e8-1ca2-4c48-9ea6-5a46f41e4333-kube-api-access-95fn7\") pod \"mariadb-operator-controller-manager-5b6b6b4c9f-5stmj\" (UID: \"465d67e8-1ca2-4c48-9ea6-5a46f41e4333\") " pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.327327 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.331234 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.338305 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-884q6\" (UniqueName: \"kubernetes.io/projected/0bbae089-e35f-4e2a-98f9-3348cb910e91-kube-api-access-884q6\") pod \"neutron-operator-controller-manager-776c5696bf-wkr5q\" (UID: \"0bbae089-e35f-4e2a-98f9-3348cb910e91\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.339384 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gj28z" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.339631 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.341007 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.342085 5029 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.345672 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-x8rhv" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.352628 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.360657 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.375791 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.379727 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq6h5\" (UniqueName: \"kubernetes.io/projected/246360b4-7120-4eb9-b734-cfd22fb35bc6-kube-api-access-wq6h5\") pod \"nova-operator-controller-manager-7f84474648-strvq\" (UID: \"246360b4-7120-4eb9-b734-cfd22fb35bc6\") " pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.379792 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb2v9\" (UniqueName: \"kubernetes.io/projected/60caa364-7d62-4d19-8de1-6b231b90adb7-kube-api-access-nb2v9\") pod \"octavia-operator-controller-manager-5f4f55cb5c-2zjps\" (UID: \"60caa364-7d62-4d19-8de1-6b231b90adb7\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.379901 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cwxt5\" (UniqueName: \"kubernetes.io/projected/54ccdb4e-12ea-481d-b139-21820e7cb430-kube-api-access-cwxt5\") pod \"ovn-operator-controller-manager-bbc5b68f9-r6d75\" (UID: \"54ccdb4e-12ea-481d-b139-21820e7cb430\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.415137 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.416250 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.423739 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.425008 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq6h5\" (UniqueName: \"kubernetes.io/projected/246360b4-7120-4eb9-b734-cfd22fb35bc6-kube-api-access-wq6h5\") pod \"nova-operator-controller-manager-7f84474648-strvq\" (UID: \"246360b4-7120-4eb9-b734-cfd22fb35bc6\") " pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.433564 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb2v9\" (UniqueName: \"kubernetes.io/projected/60caa364-7d62-4d19-8de1-6b231b90adb7-kube-api-access-nb2v9\") pod \"octavia-operator-controller-manager-5f4f55cb5c-2zjps\" (UID: \"60caa364-7d62-4d19-8de1-6b231b90adb7\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.446700 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.455653 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nc9f2" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.470118 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.481642 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.481703 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkcq\" (UniqueName: \"kubernetes.io/projected/b7d71625-72b5-4359-92ed-1931a3fe6b96-kube-api-access-rqkcq\") pod \"placement-operator-controller-manager-574d45c66c-h2xd9\" (UID: \"b7d71625-72b5-4359-92ed-1931a3fe6b96\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.481729 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47h6k\" (UniqueName: \"kubernetes.io/projected/2ec9fbff-bc5a-402c-9af7-f5cb8febf410-kube-api-access-47h6k\") pod \"swift-operator-controller-manager-7f9cc5dd44-p2j7s\" (UID: \"2ec9fbff-bc5a-402c-9af7-f5cb8febf410\") " pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.481794 
5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9brvn\" (UniqueName: \"kubernetes.io/projected/5f05cebc-30a2-43ca-8ecf-31853a8f2600-kube-api-access-9brvn\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.487631 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.498745 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxt5\" (UniqueName: \"kubernetes.io/projected/54ccdb4e-12ea-481d-b139-21820e7cb430-kube-api-access-cwxt5\") pod \"ovn-operator-controller-manager-bbc5b68f9-r6d75\" (UID: \"54ccdb4e-12ea-481d-b139-21820e7cb430\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.509958 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.511311 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.515472 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6sk65" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.525367 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.537352 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.550776 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.552366 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.582931 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9brvn\" (UniqueName: \"kubernetes.io/projected/5f05cebc-30a2-43ca-8ecf-31853a8f2600-kube-api-access-9brvn\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.583054 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2sl\" (UniqueName: \"kubernetes.io/projected/ed2536ff-a21c-4134-9acc-6d6dcc2243e4-kube-api-access-7z2sl\") pod \"telemetry-operator-controller-manager-6854b8b9d9-przwp\" (UID: \"ed2536ff-a21c-4134-9acc-6d6dcc2243e4\") " pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.583109 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert\") pod 
\"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.583148 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqkcq\" (UniqueName: \"kubernetes.io/projected/b7d71625-72b5-4359-92ed-1931a3fe6b96-kube-api-access-rqkcq\") pod \"placement-operator-controller-manager-574d45c66c-h2xd9\" (UID: \"b7d71625-72b5-4359-92ed-1931a3fe6b96\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" Mar 13 20:45:14 crc kubenswrapper[5029]: E0313 20:45:14.583229 5029 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.583374 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47h6k\" (UniqueName: \"kubernetes.io/projected/2ec9fbff-bc5a-402c-9af7-f5cb8febf410-kube-api-access-47h6k\") pod \"swift-operator-controller-manager-7f9cc5dd44-p2j7s\" (UID: \"2ec9fbff-bc5a-402c-9af7-f5cb8febf410\") " pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" Mar 13 20:45:14 crc kubenswrapper[5029]: E0313 20:45:14.583553 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert podName:5f05cebc-30a2-43ca-8ecf-31853a8f2600 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:15.083418476 +0000 UTC m=+1075.099500869 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" (UID: "5f05cebc-30a2-43ca-8ecf-31853a8f2600") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.612749 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9brvn\" (UniqueName: \"kubernetes.io/projected/5f05cebc-30a2-43ca-8ecf-31853a8f2600-kube-api-access-9brvn\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.613111 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.630458 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqkcq\" (UniqueName: \"kubernetes.io/projected/b7d71625-72b5-4359-92ed-1931a3fe6b96-kube-api-access-rqkcq\") pod \"placement-operator-controller-manager-574d45c66c-h2xd9\" (UID: \"b7d71625-72b5-4359-92ed-1931a3fe6b96\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.634249 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.636968 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47h6k\" (UniqueName: \"kubernetes.io/projected/2ec9fbff-bc5a-402c-9af7-f5cb8febf410-kube-api-access-47h6k\") pod \"swift-operator-controller-manager-7f9cc5dd44-p2j7s\" (UID: \"2ec9fbff-bc5a-402c-9af7-f5cb8febf410\") " pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.673652 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.674596 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.674704 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.677801 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6vrpm" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.685802 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.688573 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st926\" (UniqueName: \"kubernetes.io/projected/1b78339c-69bb-4905-af68-29313b2e2227-kube-api-access-st926\") pod \"watcher-operator-controller-manager-6c4d75f7f9-4ckwc\" (UID: \"1b78339c-69bb-4905-af68-29313b2e2227\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.688825 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.688962 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z2sl\" (UniqueName: \"kubernetes.io/projected/ed2536ff-a21c-4134-9acc-6d6dcc2243e4-kube-api-access-7z2sl\") pod \"telemetry-operator-controller-manager-6854b8b9d9-przwp\" (UID: \"ed2536ff-a21c-4134-9acc-6d6dcc2243e4\") " pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.689043 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdbrw\" (UniqueName: \"kubernetes.io/projected/df55c0eb-db5c-48b7-9b8b-997253cb8510-kube-api-access-mdbrw\") pod \"test-operator-controller-manager-5c5cb9c4d7-4rbtk\" (UID: \"df55c0eb-db5c-48b7-9b8b-997253cb8510\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" Mar 13 20:45:14 crc kubenswrapper[5029]: E0313 20:45:14.689191 5029 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: E0313 20:45:14.689239 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert podName:9885322a-6140-443a-9c3a-d21a4674c0f9 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:15.689223831 +0000 UTC m=+1075.705306234 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert") pod "infra-operator-controller-manager-54dc5b8f8d-jwm4t" (UID: "9885322a-6140-443a-9c3a-d21a4674c0f9") : secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.688617 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.696748 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.697938 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.701092 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tkdpm" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.701342 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.701475 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.713311 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z2sl\" (UniqueName: \"kubernetes.io/projected/ed2536ff-a21c-4134-9acc-6d6dcc2243e4-kube-api-access-7z2sl\") pod \"telemetry-operator-controller-manager-6854b8b9d9-przwp\" (UID: \"ed2536ff-a21c-4134-9acc-6d6dcc2243e4\") " pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.718947 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.720193 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.725738 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s"] Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.726035 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-vb9rv" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.758945 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.790909 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.790968 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr28w\" (UniqueName: \"kubernetes.io/projected/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-kube-api-access-rr28w\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.791511 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdbrw\" (UniqueName: \"kubernetes.io/projected/df55c0eb-db5c-48b7-9b8b-997253cb8510-kube-api-access-mdbrw\") pod 
\"test-operator-controller-manager-5c5cb9c4d7-4rbtk\" (UID: \"df55c0eb-db5c-48b7-9b8b-997253cb8510\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.791592 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sgjb\" (UniqueName: \"kubernetes.io/projected/4730a688-7219-434b-8ab5-88c3023144e1-kube-api-access-9sgjb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lm87s\" (UID: \"4730a688-7219-434b-8ab5-88c3023144e1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.791639 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st926\" (UniqueName: \"kubernetes.io/projected/1b78339c-69bb-4905-af68-29313b2e2227-kube-api-access-st926\") pod \"watcher-operator-controller-manager-6c4d75f7f9-4ckwc\" (UID: \"1b78339c-69bb-4905-af68-29313b2e2227\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.791708 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.813265 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdbrw\" (UniqueName: \"kubernetes.io/projected/df55c0eb-db5c-48b7-9b8b-997253cb8510-kube-api-access-mdbrw\") pod \"test-operator-controller-manager-5c5cb9c4d7-4rbtk\" (UID: \"df55c0eb-db5c-48b7-9b8b-997253cb8510\") " 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.815114 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st926\" (UniqueName: \"kubernetes.io/projected/1b78339c-69bb-4905-af68-29313b2e2227-kube-api-access-st926\") pod \"watcher-operator-controller-manager-6c4d75f7f9-4ckwc\" (UID: \"1b78339c-69bb-4905-af68-29313b2e2227\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.895105 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sgjb\" (UniqueName: \"kubernetes.io/projected/4730a688-7219-434b-8ab5-88c3023144e1-kube-api-access-9sgjb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lm87s\" (UID: \"4730a688-7219-434b-8ab5-88c3023144e1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.896392 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.896502 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.896535 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rr28w\" (UniqueName: \"kubernetes.io/projected/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-kube-api-access-rr28w\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:14 crc kubenswrapper[5029]: E0313 20:45:14.896935 5029 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: E0313 20:45:14.896987 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:15.396967485 +0000 UTC m=+1075.413049888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "webhook-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: E0313 20:45:14.897150 5029 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: E0313 20:45:14.897180 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:15.397170561 +0000 UTC m=+1075.413252964 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "metrics-server-cert" not found Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.906410 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.924710 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.929684 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sgjb\" (UniqueName: \"kubernetes.io/projected/4730a688-7219-434b-8ab5-88c3023144e1-kube-api-access-9sgjb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lm87s\" (UID: \"4730a688-7219-434b-8ab5-88c3023144e1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.934997 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr28w\" (UniqueName: \"kubernetes.io/projected/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-kube-api-access-rr28w\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:14 crc kubenswrapper[5029]: I0313 20:45:14.962123 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz"] Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.070347 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.102755 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:15 crc kubenswrapper[5029]: E0313 20:45:15.102999 5029 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:15 crc kubenswrapper[5029]: E0313 20:45:15.103427 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert podName:5f05cebc-30a2-43ca-8ecf-31853a8f2600 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:16.103375952 +0000 UTC m=+1076.119458355 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" (UID: "5f05cebc-30a2-43ca-8ecf-31853a8f2600") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.137593 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.203010 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" event={"ID":"cb6725e8-bfb1-4ae6-884c-d70e86c2e268","Type":"ContainerStarted","Data":"12cfd6d2aeb4d0d141a60b6d9817ab330fe904e25e236cc523eac427191fbdaa"} Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.408800 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.408958 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:15 crc kubenswrapper[5029]: E0313 20:45:15.409109 5029 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 20:45:15 crc kubenswrapper[5029]: E0313 20:45:15.409163 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:16.409146681 +0000 UTC m=+1076.425229084 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "metrics-server-cert" not found Mar 13 20:45:15 crc kubenswrapper[5029]: E0313 20:45:15.409109 5029 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 20:45:15 crc kubenswrapper[5029]: E0313 20:45:15.409484 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:16.409476159 +0000 UTC m=+1076.425558562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "webhook-server-cert" not found Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.495220 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn"] Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.517830 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56"] Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.720709 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:15 crc 
kubenswrapper[5029]: E0313 20:45:15.722937 5029 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:15 crc kubenswrapper[5029]: E0313 20:45:15.722980 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert podName:9885322a-6140-443a-9c3a-d21a4674c0f9 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:17.722966557 +0000 UTC m=+1077.739048960 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert") pod "infra-operator-controller-manager-54dc5b8f8d-jwm4t" (UID: "9885322a-6140-443a-9c3a-d21a4674c0f9") : secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.876239 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn"] Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.897990 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k"] Mar 13 20:45:15 crc kubenswrapper[5029]: W0313 20:45:15.899970 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fae77a6_7657_435b_9eaa_46738bd3adff.slice/crio-cd7600dfdef66157a6a080381256ea5a7a0b1567691fc28ad30d86427b17415e WatchSource:0}: Error finding container cd7600dfdef66157a6a080381256ea5a7a0b1567691fc28ad30d86427b17415e: Status 404 returned error can't find the container with id cd7600dfdef66157a6a080381256ea5a7a0b1567691fc28ad30d86427b17415e Mar 13 20:45:15 crc kubenswrapper[5029]: I0313 20:45:15.931932 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv"] Mar 13 20:45:16 crc 
kubenswrapper[5029]: I0313 20:45:16.152138 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.152326 5029 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.152421 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert podName:5f05cebc-30a2-43ca-8ecf-31853a8f2600 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:18.152400303 +0000 UTC m=+1078.168482696 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" (UID: "5f05cebc-30a2-43ca-8ecf-31853a8f2600") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.221530 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" event={"ID":"9fae77a6-7657-435b-9eaa-46738bd3adff","Type":"ContainerStarted","Data":"cd7600dfdef66157a6a080381256ea5a7a0b1567691fc28ad30d86427b17415e"} Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.222764 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" event={"ID":"5af430c9-929c-4f4b-8a2e-0b346433c966","Type":"ContainerStarted","Data":"971bf3412a3b94201478a0f8e0a47b130402b4442a7dfa87175e7f67f17a85d7"} Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.223974 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" event={"ID":"c78e7c55-5a08-44a3-9ab9-8229d3b63c95","Type":"ContainerStarted","Data":"c62d1e0cbcca7ab5f1a9a02c8cad91176c447e45fdda8105abbf21f1c6444d90"} Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.224962 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" event={"ID":"8572f8c5-5098-41a3-8596-e93818c51912","Type":"ContainerStarted","Data":"94bf52bab394441605644b15082476ba1ff831d79bdb97e25badaa186c1c3475"} Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.226192 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" 
event={"ID":"62985a1a-96c3-413d-b4ba-1e30082b4252","Type":"ContainerStarted","Data":"3570ff1f47b064d495122aeee7365becd8c67a8a86161496c79478a9da270233"} Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.439310 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.455426 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.455487 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.455626 5029 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.455674 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:18.455659782 +0000 UTC m=+1078.471742175 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "metrics-server-cert" not found Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.455979 5029 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.456005 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:18.455996841 +0000 UTC m=+1078.472079244 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "webhook-server-cert" not found Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.463411 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.470581 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.478038 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9"] Mar 13 20:45:16 crc kubenswrapper[5029]: W0313 20:45:16.482814 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod465d67e8_1ca2_4c48_9ea6_5a46f41e4333.slice/crio-c8853c481b4a58f7fbf4d3ac54d90651cd02a43012ec657a9370f4f9031e4f04 WatchSource:0}: Error finding container c8853c481b4a58f7fbf4d3ac54d90651cd02a43012ec657a9370f4f9031e4f04: Status 404 returned error can't find the container with id c8853c481b4a58f7fbf4d3ac54d90651cd02a43012ec657a9370f4f9031e4f04 Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.495245 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f84474648-strvq"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.497919 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.511538 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.525142 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.537105 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.547613 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.552303 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps"] Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.565966 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc"] Mar 13 
20:45:16 crc kubenswrapper[5029]: W0313 20:45:16.569819 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf55c0eb_db5c_48b7_9b8b_997253cb8510.slice/crio-092a99fd1058c6af4514412f8d59852d5ca8cd57aa76b6e55d17f9a9cb0b5b20 WatchSource:0}: Error finding container 092a99fd1058c6af4514412f8d59852d5ca8cd57aa76b6e55d17f9a9cb0b5b20: Status 404 returned error can't find the container with id 092a99fd1058c6af4514412f8d59852d5ca8cd57aa76b6e55d17f9a9cb0b5b20 Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.580130 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s"] Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.582315 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sgjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lm87s_openstack-operators(4730a688-7219-434b-8ab5-88c3023144e1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.582790 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-st926,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6c4d75f7f9-4ckwc_openstack-operators(1b78339c-69bb-4905-af68-29313b2e2227): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.583533 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" podUID="4730a688-7219-434b-8ab5-88c3023144e1" Mar 13 20:45:16 crc 
kubenswrapper[5029]: E0313 20:45:16.584967 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" podUID="1b78339c-69bb-4905-af68-29313b2e2227" Mar 13 20:45:16 crc kubenswrapper[5029]: I0313 20:45:16.589145 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz"] Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.597553 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:af6065309235d342f15ba68d4bec51117e3a21cc630b5b72ba04aca2ce0d3703,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtx5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5bc894d9b-qvzqz_openstack-operators(03ada4f5-407f-4ce4-8cdd-b91ba50d6e24): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.598895 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" podUID="03ada4f5-407f-4ce4-8cdd-b91ba50d6e24" Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.598681 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nb2v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-2zjps_openstack-operators(60caa364-7d62-4d19-8de1-6b231b90adb7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 20:45:16 crc kubenswrapper[5029]: E0313 20:45:16.601110 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" podUID="60caa364-7d62-4d19-8de1-6b231b90adb7" Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.249899 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" event={"ID":"0ea96653-f3ad-443c-85cb-27806cc8d02f","Type":"ContainerStarted","Data":"155df3587e2e45bd150969ea2060ef21b93bcc746ec4712dccb7453e56fe8a0b"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.254908 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" 
event={"ID":"b7d71625-72b5-4359-92ed-1931a3fe6b96","Type":"ContainerStarted","Data":"70abdeed90bd10371f62039451d9799bb01e56882c59d5c4737134b1e860102e"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.272770 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" event={"ID":"0bbae089-e35f-4e2a-98f9-3348cb910e91","Type":"ContainerStarted","Data":"d2a867e1770178b09a32549dfa084797ba7531316bb2ebd1f7f800ca931dc3d4"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.276631 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" event={"ID":"1b78339c-69bb-4905-af68-29313b2e2227","Type":"ContainerStarted","Data":"771f382662dc402e45bcf626348ab52f7a3c02d792474ade036fa5dc39a164f9"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.279252 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" event={"ID":"4730a688-7219-434b-8ab5-88c3023144e1","Type":"ContainerStarted","Data":"1a38b1fecfbb98b1b6dd89893ef8544867221692a33d837e0a71932f04835416"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.280760 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" event={"ID":"54ccdb4e-12ea-481d-b139-21820e7cb430","Type":"ContainerStarted","Data":"1385eace050ee2a9cde1e7c5fe8bb5a6ee66ea147a8e4a6b75b9761dc288d3b0"} Mar 13 20:45:17 crc kubenswrapper[5029]: E0313 20:45:17.281195 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" 
podUID="1b78339c-69bb-4905-af68-29313b2e2227" Mar 13 20:45:17 crc kubenswrapper[5029]: E0313 20:45:17.281501 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" podUID="4730a688-7219-434b-8ab5-88c3023144e1" Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.285525 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" event={"ID":"2ec9fbff-bc5a-402c-9af7-f5cb8febf410","Type":"ContainerStarted","Data":"efb9b80b0ac65c3de2bd303307db7209230071fce10d91ad63a865976cbcd3d5"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.287423 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" event={"ID":"246360b4-7120-4eb9-b734-cfd22fb35bc6","Type":"ContainerStarted","Data":"7d2712c8c3d616aed1cb7828afe97b44fb70408a181064d735a69957792bb653"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.289414 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" event={"ID":"465d67e8-1ca2-4c48-9ea6-5a46f41e4333","Type":"ContainerStarted","Data":"c8853c481b4a58f7fbf4d3ac54d90651cd02a43012ec657a9370f4f9031e4f04"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.298510 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" event={"ID":"e5ca1347-56a7-4fea-8256-0728bc438b76","Type":"ContainerStarted","Data":"866e827f44a521d991520a5899fa07e2e61657e12ee5339e3b19c19c5005caa5"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.306539 5029 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" event={"ID":"60caa364-7d62-4d19-8de1-6b231b90adb7","Type":"ContainerStarted","Data":"c90562f12b65b777049b4d0cd311474024418a29b85043c90ceaf15775edd852"} Mar 13 20:45:17 crc kubenswrapper[5029]: E0313 20:45:17.309113 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" podUID="60caa364-7d62-4d19-8de1-6b231b90adb7" Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.325394 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" event={"ID":"03ada4f5-407f-4ce4-8cdd-b91ba50d6e24","Type":"ContainerStarted","Data":"1d2413bbe33cb49bd8695f76fb257a014f7049955517b854b9632a0bc3844ec9"} Mar 13 20:45:17 crc kubenswrapper[5029]: E0313 20:45:17.327595 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:af6065309235d342f15ba68d4bec51117e3a21cc630b5b72ba04aca2ce0d3703\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" podUID="03ada4f5-407f-4ce4-8cdd-b91ba50d6e24" Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.331100 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" event={"ID":"df55c0eb-db5c-48b7-9b8b-997253cb8510","Type":"ContainerStarted","Data":"092a99fd1058c6af4514412f8d59852d5ca8cd57aa76b6e55d17f9a9cb0b5b20"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.340623 5029 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" event={"ID":"ed2536ff-a21c-4134-9acc-6d6dcc2243e4","Type":"ContainerStarted","Data":"fa18b98baedb83fd3d4d770fe08d720613f01bf9acf85f959f1ec77f1a136d84"} Mar 13 20:45:17 crc kubenswrapper[5029]: I0313 20:45:17.794136 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:17 crc kubenswrapper[5029]: E0313 20:45:17.794356 5029 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:17 crc kubenswrapper[5029]: E0313 20:45:17.794959 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert podName:9885322a-6140-443a-9c3a-d21a4674c0f9 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:21.794876326 +0000 UTC m=+1081.810958729 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert") pod "infra-operator-controller-manager-54dc5b8f8d-jwm4t" (UID: "9885322a-6140-443a-9c3a-d21a4674c0f9") : secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:18 crc kubenswrapper[5029]: I0313 20:45:18.208002 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.208168 5029 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.208228 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert podName:5f05cebc-30a2-43ca-8ecf-31853a8f2600 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:22.208211847 +0000 UTC m=+1082.224294250 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" (UID: "5f05cebc-30a2-43ca-8ecf-31853a8f2600") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.375735 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" podUID="1b78339c-69bb-4905-af68-29313b2e2227" Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.381721 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:af6065309235d342f15ba68d4bec51117e3a21cc630b5b72ba04aca2ce0d3703\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" podUID="03ada4f5-407f-4ce4-8cdd-b91ba50d6e24" Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.382325 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" podUID="4730a688-7219-434b-8ab5-88c3023144e1" Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.382455 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" podUID="60caa364-7d62-4d19-8de1-6b231b90adb7" Mar 13 20:45:18 crc kubenswrapper[5029]: I0313 20:45:18.515106 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:18 crc kubenswrapper[5029]: I0313 20:45:18.515189 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.515376 5029 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.515437 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:22.515418392 +0000 UTC m=+1082.531500795 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "metrics-server-cert" not found Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.515825 5029 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 20:45:18 crc kubenswrapper[5029]: E0313 20:45:18.515888 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:22.515877266 +0000 UTC m=+1082.531959659 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "webhook-server-cert" not found Mar 13 20:45:21 crc kubenswrapper[5029]: I0313 20:45:21.888735 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:21 crc kubenswrapper[5029]: E0313 20:45:21.888961 5029 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:21 crc kubenswrapper[5029]: E0313 20:45:21.889257 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert 
podName:9885322a-6140-443a-9c3a-d21a4674c0f9 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:29.889231304 +0000 UTC m=+1089.905313737 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert") pod "infra-operator-controller-manager-54dc5b8f8d-jwm4t" (UID: "9885322a-6140-443a-9c3a-d21a4674c0f9") : secret "infra-operator-webhook-server-cert" not found Mar 13 20:45:22 crc kubenswrapper[5029]: I0313 20:45:22.293107 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:22 crc kubenswrapper[5029]: E0313 20:45:22.293326 5029 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:22 crc kubenswrapper[5029]: E0313 20:45:22.293413 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert podName:5f05cebc-30a2-43ca-8ecf-31853a8f2600 nodeName:}" failed. No retries permitted until 2026-03-13 20:45:30.293395314 +0000 UTC m=+1090.309477717 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" (UID: "5f05cebc-30a2-43ca-8ecf-31853a8f2600") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 20:45:22 crc kubenswrapper[5029]: I0313 20:45:22.599401 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:22 crc kubenswrapper[5029]: E0313 20:45:22.599574 5029 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 20:45:22 crc kubenswrapper[5029]: E0313 20:45:22.599895 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:30.599828119 +0000 UTC m=+1090.615910582 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "metrics-server-cert" not found Mar 13 20:45:22 crc kubenswrapper[5029]: I0313 20:45:22.599987 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:22 crc kubenswrapper[5029]: E0313 20:45:22.600108 5029 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 20:45:22 crc kubenswrapper[5029]: E0313 20:45:22.600139 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:30.600130318 +0000 UTC m=+1090.616212811 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "webhook-server-cert" not found Mar 13 20:45:29 crc kubenswrapper[5029]: E0313 20:45:29.679530 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:dd62e104225ea255af5a32828af4c21e1dfb50fbdf35cd41d07d1326f9017a40" Mar 13 20:45:29 crc kubenswrapper[5029]: E0313 20:45:29.686165 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:dd62e104225ea255af5a32828af4c21e1dfb50fbdf35cd41d07d1326f9017a40,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nvwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-57b484b4df-dqb4l_openstack-operators(0ea96653-f3ad-443c-85cb-27806cc8d02f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:29 crc kubenswrapper[5029]: E0313 20:45:29.687343 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" podUID="0ea96653-f3ad-443c-85cb-27806cc8d02f" Mar 13 20:45:29 crc kubenswrapper[5029]: I0313 20:45:29.911440 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert\") pod 
\"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:29 crc kubenswrapper[5029]: I0313 20:45:29.919592 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9885322a-6140-443a-9c3a-d21a4674c0f9-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-jwm4t\" (UID: \"9885322a-6140-443a-9c3a-d21a4674c0f9\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:29 crc kubenswrapper[5029]: I0313 20:45:29.949451 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hxd6g" Mar 13 20:45:29 crc kubenswrapper[5029]: I0313 20:45:29.958524 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:30 crc kubenswrapper[5029]: E0313 20:45:30.306096 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721" Mar 13 20:45:30 crc kubenswrapper[5029]: E0313 20:45:30.306282 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-884q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-776c5696bf-wkr5q_openstack-operators(0bbae089-e35f-4e2a-98f9-3348cb910e91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:30 crc kubenswrapper[5029]: E0313 20:45:30.308326 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" podUID="0bbae089-e35f-4e2a-98f9-3348cb910e91" Mar 13 20:45:30 crc kubenswrapper[5029]: I0313 20:45:30.318203 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:30 crc kubenswrapper[5029]: I0313 20:45:30.321864 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/5f05cebc-30a2-43ca-8ecf-31853a8f2600-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b75gm8n\" (UID: \"5f05cebc-30a2-43ca-8ecf-31853a8f2600\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:30 crc kubenswrapper[5029]: I0313 20:45:30.358316 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gj28z" Mar 13 20:45:30 crc kubenswrapper[5029]: I0313 20:45:30.368028 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:30 crc kubenswrapper[5029]: I0313 20:45:30.622193 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:30 crc kubenswrapper[5029]: I0313 20:45:30.622332 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:30 crc kubenswrapper[5029]: E0313 20:45:30.622482 5029 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 20:45:30 crc kubenswrapper[5029]: E0313 20:45:30.622537 5029 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs podName:c2af04e3-221f-45fc-8a9f-c0f413b9b95c nodeName:}" failed. No retries permitted until 2026-03-13 20:45:46.622519784 +0000 UTC m=+1106.638602187 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs") pod "openstack-operator-controller-manager-5698bc49b8-w5dsp" (UID: "c2af04e3-221f-45fc-8a9f-c0f413b9b95c") : secret "webhook-server-cert" not found Mar 13 20:45:30 crc kubenswrapper[5029]: I0313 20:45:30.627541 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-metrics-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:30 crc kubenswrapper[5029]: E0313 20:45:30.668819 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" podUID="0bbae089-e35f-4e2a-98f9-3348cb910e91" Mar 13 20:45:30 crc kubenswrapper[5029]: E0313 20:45:30.669755 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:dd62e104225ea255af5a32828af4c21e1dfb50fbdf35cd41d07d1326f9017a40\\\"\"" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" podUID="0ea96653-f3ad-443c-85cb-27806cc8d02f" Mar 13 20:45:31 crc kubenswrapper[5029]: E0313 
20:45:31.581347 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7" Mar 13 20:45:31 crc kubenswrapper[5029]: E0313 20:45:31.581819 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nxdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66d56f6ff4-djjwn_openstack-operators(9fae77a6-7657-435b-9eaa-46738bd3adff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:31 crc kubenswrapper[5029]: E0313 20:45:31.583278 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" podUID="9fae77a6-7657-435b-9eaa-46738bd3adff" Mar 13 20:45:31 crc kubenswrapper[5029]: E0313 20:45:31.670626 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" podUID="9fae77a6-7657-435b-9eaa-46738bd3adff" Mar 13 20:45:31 crc kubenswrapper[5029]: I0313 20:45:31.950531 5029 patch_prober.go:28] interesting 
pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:45:31 crc kubenswrapper[5029]: I0313 20:45:31.950607 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:45:32 crc kubenswrapper[5029]: E0313 20:45:32.422926 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f" Mar 13 20:45:32 crc kubenswrapper[5029]: E0313 20:45:32.423170 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwxt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bbc5b68f9-r6d75_openstack-operators(54ccdb4e-12ea-481d-b139-21820e7cb430): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:32 crc kubenswrapper[5029]: E0313 20:45:32.424416 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" podUID="54ccdb4e-12ea-481d-b139-21820e7cb430" Mar 13 20:45:32 crc kubenswrapper[5029]: E0313 20:45:32.679658 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" podUID="54ccdb4e-12ea-481d-b139-21820e7cb430" Mar 13 20:45:46 crc kubenswrapper[5029]: E0313 20:45:46.137812 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca" Mar 13 20:45:46 crc kubenswrapper[5029]: E0313 20:45:46.138541 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqr7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-684f77d66d-gzknz_openstack-operators(e5ca1347-56a7-4fea-8256-0728bc438b76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:46 crc kubenswrapper[5029]: E0313 20:45:46.139905 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" podUID="e5ca1347-56a7-4fea-8256-0728bc438b76" Mar 13 20:45:46 crc kubenswrapper[5029]: E0313 20:45:46.541085 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:bbe772fa051f782c9dcc3c34ce43495e1116aa9089a760c10068790baa9b25ff" Mar 13 20:45:46 crc kubenswrapper[5029]: E0313 20:45:46.541319 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:bbe772fa051f782c9dcc3c34ce43495e1116aa9089a760c10068790baa9b25ff,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wq6h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7f84474648-strvq_openstack-operators(246360b4-7120-4eb9-b734-cfd22fb35bc6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:46 crc kubenswrapper[5029]: E0313 20:45:46.543604 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" podUID="246360b4-7120-4eb9-b734-cfd22fb35bc6" Mar 13 20:45:46 crc kubenswrapper[5029]: I0313 20:45:46.678395 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs\") pod 
\"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:46 crc kubenswrapper[5029]: I0313 20:45:46.699758 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2af04e3-221f-45fc-8a9f-c0f413b9b95c-webhook-certs\") pod \"openstack-operator-controller-manager-5698bc49b8-w5dsp\" (UID: \"c2af04e3-221f-45fc-8a9f-c0f413b9b95c\") " pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:46 crc kubenswrapper[5029]: E0313 20:45:46.795650 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:bbe772fa051f782c9dcc3c34ce43495e1116aa9089a760c10068790baa9b25ff\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" podUID="246360b4-7120-4eb9-b734-cfd22fb35bc6" Mar 13 20:45:46 crc kubenswrapper[5029]: E0313 20:45:46.796013 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" podUID="e5ca1347-56a7-4fea-8256-0728bc438b76" Mar 13 20:45:46 crc kubenswrapper[5029]: I0313 20:45:46.929997 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tkdpm" Mar 13 20:45:46 crc kubenswrapper[5029]: I0313 20:45:46.939423 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:47 crc kubenswrapper[5029]: E0313 20:45:47.141465 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571" Mar 13 20:45:47 crc kubenswrapper[5029]: E0313 20:45:47.141643 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nb2v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-2zjps_openstack-operators(60caa364-7d62-4d19-8de1-6b231b90adb7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:47 crc kubenswrapper[5029]: E0313 20:45:47.142773 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" podUID="60caa364-7d62-4d19-8de1-6b231b90adb7" Mar 13 20:45:47 crc kubenswrapper[5029]: E0313 20:45:47.748977 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807" Mar 13 20:45:47 crc kubenswrapper[5029]: E0313 20:45:47.749222 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-st926,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6c4d75f7f9-4ckwc_openstack-operators(1b78339c-69bb-4905-af68-29313b2e2227): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:47 crc kubenswrapper[5029]: E0313 20:45:47.751964 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" podUID="1b78339c-69bb-4905-af68-29313b2e2227" Mar 13 20:45:49 crc kubenswrapper[5029]: E0313 20:45:49.468317 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:af6065309235d342f15ba68d4bec51117e3a21cc630b5b72ba04aca2ce0d3703" Mar 13 20:45:49 crc kubenswrapper[5029]: E0313 20:45:49.468873 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:af6065309235d342f15ba68d4bec51117e3a21cc630b5b72ba04aca2ce0d3703,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtx5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5bc894d9b-qvzqz_openstack-operators(03ada4f5-407f-4ce4-8cdd-b91ba50d6e24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:49 crc kubenswrapper[5029]: E0313 20:45:49.470053 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" podUID="03ada4f5-407f-4ce4-8cdd-b91ba50d6e24" Mar 13 20:45:49 crc kubenswrapper[5029]: I0313 20:45:49.913684 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t"] Mar 13 20:45:50 crc kubenswrapper[5029]: E0313 20:45:50.080269 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 13 20:45:50 crc kubenswrapper[5029]: E0313 20:45:50.080661 5029 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sgjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-lm87s_openstack-operators(4730a688-7219-434b-8ab5-88c3023144e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:45:50 crc kubenswrapper[5029]: E0313 20:45:50.081895 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" podUID="4730a688-7219-434b-8ab5-88c3023144e1" Mar 13 20:45:50 crc kubenswrapper[5029]: W0313 20:45:50.129563 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9885322a_6140_443a_9c3a_d21a4674c0f9.slice/crio-e80a956f0a763c0691e731b4c10e11cec2e90832148ac3472b03d59b423bc80c WatchSource:0}: Error finding container e80a956f0a763c0691e731b4c10e11cec2e90832148ac3472b03d59b423bc80c: Status 404 returned error can't find the container with id e80a956f0a763c0691e731b4c10e11cec2e90832148ac3472b03d59b423bc80c Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.445917 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp"] Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.478305 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n"] Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.822584 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" event={"ID":"8572f8c5-5098-41a3-8596-e93818c51912","Type":"ContainerStarted","Data":"b803ea6474fed78e2d74c2b6b549647c77ffea659d07d4c1912b14bbe46d03f5"} Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.822987 5029 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.829566 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" event={"ID":"b7d71625-72b5-4359-92ed-1931a3fe6b96","Type":"ContainerStarted","Data":"0f39f2cea540dff7028e233cdc5b72330a6c73c742f18a853f4f5ae955e8612f"} Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.829742 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.835309 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" event={"ID":"c2af04e3-221f-45fc-8a9f-c0f413b9b95c","Type":"ContainerStarted","Data":"55901320da4b0558b46fe4b64dde7f50a1391d9430146eea05ffd6fa9349a428"} Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.839063 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" event={"ID":"df55c0eb-db5c-48b7-9b8b-997253cb8510","Type":"ContainerStarted","Data":"605ec21ec6ffd87616ce33832794868db11c565b15d6d5fb7075aa861a1e1ee2"} Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.839960 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.844622 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" event={"ID":"c78e7c55-5a08-44a3-9ab9-8229d3b63c95","Type":"ContainerStarted","Data":"3131cdc120aebc8b4a6869c3e892b9c2d89531966fcebda265cf838b88baa4b1"} Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.844957 5029 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.847298 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" event={"ID":"5af430c9-929c-4f4b-8a2e-0b346433c966","Type":"ContainerStarted","Data":"a19bb10fbdb01f23998d33bfc4ee993d5768867d80400d0df259b1ec6a3899d8"} Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.847791 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.851846 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" event={"ID":"ed2536ff-a21c-4134-9acc-6d6dcc2243e4","Type":"ContainerStarted","Data":"0ae390dff531bafd61d8c1275a3a7c8873aee6fe8615d1a74a747023a0410f2c"} Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.852419 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.854974 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" event={"ID":"9885322a-6140-443a-9c3a-d21a4674c0f9","Type":"ContainerStarted","Data":"e80a956f0a763c0691e731b4c10e11cec2e90832148ac3472b03d59b423bc80c"} Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.859379 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" event={"ID":"5f05cebc-30a2-43ca-8ecf-31853a8f2600","Type":"ContainerStarted","Data":"cbdc7ac4aeae6bd04c880f4f026999a1f3f435b09a31ed2577ed768a72fce46e"} Mar 13 20:45:50 crc 
kubenswrapper[5029]: I0313 20:45:50.861210 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" event={"ID":"465d67e8-1ca2-4c48-9ea6-5a46f41e4333","Type":"ContainerStarted","Data":"b8996f2330bbe39b3f3a9b7cb3b61164158fd7fc195039dff748152d14463f17"} Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.861916 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.875892 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" podStartSLOduration=6.661237663 podStartE2EDuration="37.875836735s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:15.909942056 +0000 UTC m=+1075.926024459" lastFinishedPulling="2026-03-13 20:45:47.124541128 +0000 UTC m=+1107.140623531" observedRunningTime="2026-03-13 20:45:50.875063655 +0000 UTC m=+1110.891146078" watchObservedRunningTime="2026-03-13 20:45:50.875836735 +0000 UTC m=+1110.891919138" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.928616 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" podStartSLOduration=7.294144628 podStartE2EDuration="37.928587429s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.490079657 +0000 UTC m=+1076.506162060" lastFinishedPulling="2026-03-13 20:45:47.124522458 +0000 UTC m=+1107.140604861" observedRunningTime="2026-03-13 20:45:50.925625718 +0000 UTC m=+1110.941708121" watchObservedRunningTime="2026-03-13 20:45:50.928587429 +0000 UTC m=+1110.944669832" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.930528 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" podStartSLOduration=6.387901475 podStartE2EDuration="36.930522481s" podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.581974654 +0000 UTC m=+1076.598057067" lastFinishedPulling="2026-03-13 20:45:47.12459566 +0000 UTC m=+1107.140678073" observedRunningTime="2026-03-13 20:45:50.907568297 +0000 UTC m=+1110.923650700" watchObservedRunningTime="2026-03-13 20:45:50.930522481 +0000 UTC m=+1110.946604884" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.956522 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" podStartSLOduration=6.733511267 podStartE2EDuration="37.956496757s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:15.902273108 +0000 UTC m=+1075.918355511" lastFinishedPulling="2026-03-13 20:45:47.125258588 +0000 UTC m=+1107.141341001" observedRunningTime="2026-03-13 20:45:50.946564077 +0000 UTC m=+1110.962646490" watchObservedRunningTime="2026-03-13 20:45:50.956496757 +0000 UTC m=+1110.972579160" Mar 13 20:45:50 crc kubenswrapper[5029]: I0313 20:45:50.977588 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" podStartSLOduration=6.377889795 podStartE2EDuration="37.977561459s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:15.522575492 +0000 UTC m=+1075.538657895" lastFinishedPulling="2026-03-13 20:45:47.122247156 +0000 UTC m=+1107.138329559" observedRunningTime="2026-03-13 20:45:50.967214248 +0000 UTC m=+1110.983296671" watchObservedRunningTime="2026-03-13 20:45:50.977561459 +0000 UTC m=+1110.993643862" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.033866 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" podStartSLOduration=6.4902787459999995 podStartE2EDuration="37.033832287s" podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.581695146 +0000 UTC m=+1076.597777549" lastFinishedPulling="2026-03-13 20:45:47.125248687 +0000 UTC m=+1107.141331090" observedRunningTime="2026-03-13 20:45:51.033101888 +0000 UTC m=+1111.049184281" watchObservedRunningTime="2026-03-13 20:45:51.033832287 +0000 UTC m=+1111.049914690" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.036043 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" podStartSLOduration=6.438345076 podStartE2EDuration="37.036033297s" podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.526998321 +0000 UTC m=+1076.543080724" lastFinishedPulling="2026-03-13 20:45:47.124686542 +0000 UTC m=+1107.140768945" observedRunningTime="2026-03-13 20:45:51.008414077 +0000 UTC m=+1111.024496480" watchObservedRunningTime="2026-03-13 20:45:51.036033297 +0000 UTC m=+1111.052115700" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.884889 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" event={"ID":"54ccdb4e-12ea-481d-b139-21820e7cb430","Type":"ContainerStarted","Data":"1f1d3f0371bccc17ce8284e0c70e90268bca775b82a496f4d3f345567f8083c7"} Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.885507 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.888029 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" 
event={"ID":"c2af04e3-221f-45fc-8a9f-c0f413b9b95c","Type":"ContainerStarted","Data":"6dfe08c1c5523d82b6ba774a756a232894a3e3ce1bae2c7b24a4a3fe1aecc052"} Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.888430 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.890372 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" event={"ID":"62985a1a-96c3-413d-b4ba-1e30082b4252","Type":"ContainerStarted","Data":"a8d26d531f1889bdebd5a9299d3790a843c1a8706e6edcde42be2ab338f104f3"} Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.890718 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.894483 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" event={"ID":"9fae77a6-7657-435b-9eaa-46738bd3adff","Type":"ContainerStarted","Data":"edcd3494f081be094fcf1ff54de50884034d771c99dbd7f1d8c111f85c09786b"} Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.895028 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.896965 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" event={"ID":"0bbae089-e35f-4e2a-98f9-3348cb910e91","Type":"ContainerStarted","Data":"54915dcf4d15d9026b1a38935ec15d326952d4f54cdabc9ed52d7c3880224191"} Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.897394 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.928136 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" event={"ID":"2ec9fbff-bc5a-402c-9af7-f5cb8febf410","Type":"ContainerStarted","Data":"bb12a43fcb5c2cfe8d00b405cead2fb9a6ef9bb688fa41ae4e525516277bb6bd"} Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.929121 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.952798 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" event={"ID":"0ea96653-f3ad-443c-85cb-27806cc8d02f","Type":"ContainerStarted","Data":"aba9dbac6d43dee351f6d0064af91f059e084e9eff31e7b163cd742123eda3b4"} Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.953244 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.968088 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" podStartSLOduration=4.372232274 podStartE2EDuration="37.96806882s" podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.575084587 +0000 UTC m=+1076.591166990" lastFinishedPulling="2026-03-13 20:45:50.170921133 +0000 UTC m=+1110.187003536" observedRunningTime="2026-03-13 20:45:51.967365361 +0000 UTC m=+1111.983447764" watchObservedRunningTime="2026-03-13 20:45:51.96806882 +0000 UTC m=+1111.984151223" Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.969340 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" event={"ID":"cb6725e8-bfb1-4ae6-884c-d70e86c2e268","Type":"ContainerStarted","Data":"b7040110603247ae2783045453082760a5a405062889e45a05021cd9e448d878"} Mar 13 20:45:51 crc kubenswrapper[5029]: I0313 20:45:51.970663 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" Mar 13 20:45:52 crc kubenswrapper[5029]: I0313 20:45:52.149693 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" podStartSLOduration=38.149671614 podStartE2EDuration="38.149671614s" podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:45:52.136353001 +0000 UTC m=+1112.152435404" watchObservedRunningTime="2026-03-13 20:45:52.149671614 +0000 UTC m=+1112.165754017" Mar 13 20:45:52 crc kubenswrapper[5029]: I0313 20:45:52.150054 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" podStartSLOduration=4.83544901 podStartE2EDuration="39.150048124s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:15.903716187 +0000 UTC m=+1075.919798590" lastFinishedPulling="2026-03-13 20:45:50.218315301 +0000 UTC m=+1110.234397704" observedRunningTime="2026-03-13 20:45:52.058065484 +0000 UTC m=+1112.074147897" watchObservedRunningTime="2026-03-13 20:45:52.150048124 +0000 UTC m=+1112.166130537" Mar 13 20:45:52 crc kubenswrapper[5029]: I0313 20:45:52.233831 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" podStartSLOduration=7.656116862 podStartE2EDuration="39.233811159s" 
podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:15.546977905 +0000 UTC m=+1075.563060308" lastFinishedPulling="2026-03-13 20:45:47.124672202 +0000 UTC m=+1107.140754605" observedRunningTime="2026-03-13 20:45:52.230114888 +0000 UTC m=+1112.246197291" watchObservedRunningTime="2026-03-13 20:45:52.233811159 +0000 UTC m=+1112.249893562" Mar 13 20:45:52 crc kubenswrapper[5029]: I0313 20:45:52.236981 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" podStartSLOduration=5.580760537 podStartE2EDuration="39.236964315s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.529166489 +0000 UTC m=+1076.545248892" lastFinishedPulling="2026-03-13 20:45:50.185370267 +0000 UTC m=+1110.201452670" observedRunningTime="2026-03-13 20:45:52.168055302 +0000 UTC m=+1112.184137705" watchObservedRunningTime="2026-03-13 20:45:52.236964315 +0000 UTC m=+1112.253046728" Mar 13 20:45:52 crc kubenswrapper[5029]: I0313 20:45:52.321295 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" podStartSLOduration=5.703491554 podStartE2EDuration="39.321276286s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.52989914 +0000 UTC m=+1076.545981543" lastFinishedPulling="2026-03-13 20:45:50.147683872 +0000 UTC m=+1110.163766275" observedRunningTime="2026-03-13 20:45:52.280128957 +0000 UTC m=+1112.296211360" watchObservedRunningTime="2026-03-13 20:45:52.321276286 +0000 UTC m=+1112.337358689" Mar 13 20:45:52 crc kubenswrapper[5029]: I0313 20:45:52.322132 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" podStartSLOduration=7.772878212 podStartE2EDuration="38.322126418s" 
podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.575470227 +0000 UTC m=+1076.591552630" lastFinishedPulling="2026-03-13 20:45:47.124718433 +0000 UTC m=+1107.140800836" observedRunningTime="2026-03-13 20:45:52.31924968 +0000 UTC m=+1112.335332083" watchObservedRunningTime="2026-03-13 20:45:52.322126418 +0000 UTC m=+1112.338208821" Mar 13 20:45:52 crc kubenswrapper[5029]: I0313 20:45:52.399713 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" podStartSLOduration=7.445227213 podStartE2EDuration="39.399688546s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:15.170152267 +0000 UTC m=+1075.186234670" lastFinishedPulling="2026-03-13 20:45:47.1246136 +0000 UTC m=+1107.140696003" observedRunningTime="2026-03-13 20:45:52.343762126 +0000 UTC m=+1112.359844529" watchObservedRunningTime="2026-03-13 20:45:52.399688546 +0000 UTC m=+1112.415770949" Mar 13 20:45:56 crc kubenswrapper[5029]: I0313 20:45:56.012906 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" event={"ID":"5f05cebc-30a2-43ca-8ecf-31853a8f2600","Type":"ContainerStarted","Data":"debae27164b3cc7bf345b748f81d83580c016c8ca6146c583a9c6b6d1afe501f"} Mar 13 20:45:56 crc kubenswrapper[5029]: I0313 20:45:56.013537 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:45:56 crc kubenswrapper[5029]: I0313 20:45:56.014315 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" event={"ID":"9885322a-6140-443a-9c3a-d21a4674c0f9","Type":"ContainerStarted","Data":"989fe2fc1a9788c4872975b06910fbe8eff1c7057b57bd5a037d3d2d3d8a8463"} Mar 13 20:45:56 crc 
kubenswrapper[5029]: I0313 20:45:56.014481 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:45:56 crc kubenswrapper[5029]: I0313 20:45:56.041414 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" podStartSLOduration=37.441700389 podStartE2EDuration="42.041382575s" podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="2026-03-13 20:45:50.522544037 +0000 UTC m=+1110.538626440" lastFinishedPulling="2026-03-13 20:45:55.122226223 +0000 UTC m=+1115.138308626" observedRunningTime="2026-03-13 20:45:56.037272883 +0000 UTC m=+1116.053355286" watchObservedRunningTime="2026-03-13 20:45:56.041382575 +0000 UTC m=+1116.057464978" Mar 13 20:45:56 crc kubenswrapper[5029]: I0313 20:45:56.061747 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" podStartSLOduration=38.083650471 podStartE2EDuration="43.061728938s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:50.147644361 +0000 UTC m=+1110.163726764" lastFinishedPulling="2026-03-13 20:45:55.125722828 +0000 UTC m=+1115.141805231" observedRunningTime="2026-03-13 20:45:56.060231367 +0000 UTC m=+1116.076313780" watchObservedRunningTime="2026-03-13 20:45:56.061728938 +0000 UTC m=+1116.077811341" Mar 13 20:45:56 crc kubenswrapper[5029]: I0313 20:45:56.945022 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5698bc49b8-w5dsp" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.149534 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557246-hf9hf"] Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.150476 5029 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557246-hf9hf" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.152482 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.153776 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.160434 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557246-hf9hf"] Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.162951 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.209086 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cg9m\" (UniqueName: \"kubernetes.io/projected/487af116-ac18-4881-9db5-7c99f89ac667-kube-api-access-5cg9m\") pod \"auto-csr-approver-29557246-hf9hf\" (UID: \"487af116-ac18-4881-9db5-7c99f89ac667\") " pod="openshift-infra/auto-csr-approver-29557246-hf9hf" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.310328 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cg9m\" (UniqueName: \"kubernetes.io/projected/487af116-ac18-4881-9db5-7c99f89ac667-kube-api-access-5cg9m\") pod \"auto-csr-approver-29557246-hf9hf\" (UID: \"487af116-ac18-4881-9db5-7c99f89ac667\") " pod="openshift-infra/auto-csr-approver-29557246-hf9hf" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.336814 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cg9m\" (UniqueName: \"kubernetes.io/projected/487af116-ac18-4881-9db5-7c99f89ac667-kube-api-access-5cg9m\") pod \"auto-csr-approver-29557246-hf9hf\" (UID: 
\"487af116-ac18-4881-9db5-7c99f89ac667\") " pod="openshift-infra/auto-csr-approver-29557246-hf9hf" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.375164 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b75gm8n" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 20:46:00.475828 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557246-hf9hf" Mar 13 20:46:00 crc kubenswrapper[5029]: E0313 20:46:00.608343 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:af6065309235d342f15ba68d4bec51117e3a21cc630b5b72ba04aca2ce0d3703\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" podUID="03ada4f5-407f-4ce4-8cdd-b91ba50d6e24" Mar 13 20:46:00 crc kubenswrapper[5029]: E0313 20:46:00.618926 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" podUID="60caa364-7d62-4d19-8de1-6b231b90adb7" Mar 13 20:46:00 crc kubenswrapper[5029]: E0313 20:46:00.629867 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" podUID="1b78339c-69bb-4905-af68-29313b2e2227" Mar 13 20:46:00 crc kubenswrapper[5029]: I0313 
20:46:00.925457 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557246-hf9hf"] Mar 13 20:46:01 crc kubenswrapper[5029]: I0313 20:46:01.045891 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557246-hf9hf" event={"ID":"487af116-ac18-4881-9db5-7c99f89ac667","Type":"ContainerStarted","Data":"11cdec3d3c65ec86408b55a2a738e2988d428ef37f4f7d766786f2821b4a5c38"} Mar 13 20:46:01 crc kubenswrapper[5029]: I0313 20:46:01.950274 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:46:01 crc kubenswrapper[5029]: I0313 20:46:01.950670 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:46:02 crc kubenswrapper[5029]: I0313 20:46:02.054494 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" event={"ID":"246360b4-7120-4eb9-b734-cfd22fb35bc6","Type":"ContainerStarted","Data":"0cfd24a708aee68879f230c5d6ff41f7989d883c86536dc37473bb7f1c8b2518"} Mar 13 20:46:02 crc kubenswrapper[5029]: I0313 20:46:02.054713 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" Mar 13 20:46:02 crc kubenswrapper[5029]: I0313 20:46:02.056427 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" 
event={"ID":"e5ca1347-56a7-4fea-8256-0728bc438b76","Type":"ContainerStarted","Data":"e0a23ea3a7a628bac1c3ac92f8dcd19fb9bc4eff20e07a3ce74adf1f8b8728aa"} Mar 13 20:46:02 crc kubenswrapper[5029]: I0313 20:46:02.056610 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" Mar 13 20:46:02 crc kubenswrapper[5029]: I0313 20:46:02.079939 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" podStartSLOduration=4.365758418 podStartE2EDuration="49.079915062s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.529593891 +0000 UTC m=+1076.545676304" lastFinishedPulling="2026-03-13 20:46:01.243750545 +0000 UTC m=+1121.259832948" observedRunningTime="2026-03-13 20:46:02.076620553 +0000 UTC m=+1122.092702976" watchObservedRunningTime="2026-03-13 20:46:02.079915062 +0000 UTC m=+1122.095997465" Mar 13 20:46:02 crc kubenswrapper[5029]: I0313 20:46:02.094677 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" podStartSLOduration=4.389154444 podStartE2EDuration="49.094660313s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.535247205 +0000 UTC m=+1076.551329608" lastFinishedPulling="2026-03-13 20:46:01.240753064 +0000 UTC m=+1121.256835477" observedRunningTime="2026-03-13 20:46:02.091263841 +0000 UTC m=+1122.107346254" watchObservedRunningTime="2026-03-13 20:46:02.094660313 +0000 UTC m=+1122.110742716" Mar 13 20:46:02 crc kubenswrapper[5029]: E0313 20:46:02.601432 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" podUID="4730a688-7219-434b-8ab5-88c3023144e1" Mar 13 20:46:03 crc kubenswrapper[5029]: I0313 20:46:03.067358 5029 generic.go:334] "Generic (PLEG): container finished" podID="487af116-ac18-4881-9db5-7c99f89ac667" containerID="f086fcff288e6521a2e4ee84260004ee2c15f79ab50713a5e54854ece9a46fb2" exitCode=0 Mar 13 20:46:03 crc kubenswrapper[5029]: I0313 20:46:03.067512 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557246-hf9hf" event={"ID":"487af116-ac18-4881-9db5-7c99f89ac667","Type":"ContainerDied","Data":"f086fcff288e6521a2e4ee84260004ee2c15f79ab50713a5e54854ece9a46fb2"} Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.049355 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-d47688694-cmqrn" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.057076 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-wss56" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.098566 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-djjwn" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.121694 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-jtfsz" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.155621 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-8nx6k" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.188440 5029 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-pthcv" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.415088 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557246-hf9hf" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.491031 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-57b484b4df-dqb4l" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.529680 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-5stmj" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.540050 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-wkr5q" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.582826 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cg9m\" (UniqueName: \"kubernetes.io/projected/487af116-ac18-4881-9db5-7c99f89ac667-kube-api-access-5cg9m\") pod \"487af116-ac18-4881-9db5-7c99f89ac667\" (UID: \"487af116-ac18-4881-9db5-7c99f89ac667\") " Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.597142 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487af116-ac18-4881-9db5-7c99f89ac667-kube-api-access-5cg9m" (OuterVolumeSpecName: "kube-api-access-5cg9m") pod "487af116-ac18-4881-9db5-7c99f89ac667" (UID: "487af116-ac18-4881-9db5-7c99f89ac667"). InnerVolumeSpecName "kube-api-access-5cg9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.638487 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-r6d75" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.685714 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cg9m\" (UniqueName: \"kubernetes.io/projected/487af116-ac18-4881-9db5-7c99f89ac667-kube-api-access-5cg9m\") on node \"crc\" DevicePath \"\"" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.701106 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-h2xd9" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.763522 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-p2j7s" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.910580 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-przwp" Mar 13 20:46:04 crc kubenswrapper[5029]: I0313 20:46:04.927832 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-4rbtk" Mar 13 20:46:05 crc kubenswrapper[5029]: I0313 20:46:05.084560 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557246-hf9hf" event={"ID":"487af116-ac18-4881-9db5-7c99f89ac667","Type":"ContainerDied","Data":"11cdec3d3c65ec86408b55a2a738e2988d428ef37f4f7d766786f2821b4a5c38"} Mar 13 20:46:05 crc kubenswrapper[5029]: I0313 20:46:05.084624 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11cdec3d3c65ec86408b55a2a738e2988d428ef37f4f7d766786f2821b4a5c38" Mar 13 20:46:05 crc kubenswrapper[5029]: I0313 
20:46:05.084728 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557246-hf9hf" Mar 13 20:46:05 crc kubenswrapper[5029]: I0313 20:46:05.476129 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557240-x4n6h"] Mar 13 20:46:05 crc kubenswrapper[5029]: I0313 20:46:05.483704 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557240-x4n6h"] Mar 13 20:46:06 crc kubenswrapper[5029]: I0313 20:46:06.613205 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac67ec2-0066-4674-9b71-5e10b6385b42" path="/var/lib/kubelet/pods/aac67ec2-0066-4674-9b71-5e10b6385b42/volumes" Mar 13 20:46:09 crc kubenswrapper[5029]: I0313 20:46:09.965737 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-jwm4t" Mar 13 20:46:13 crc kubenswrapper[5029]: I0313 20:46:13.141567 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" event={"ID":"03ada4f5-407f-4ce4-8cdd-b91ba50d6e24","Type":"ContainerStarted","Data":"e7bb3e410a2f0fb8f8ad715eefde49c1f28904fbada4390a772a4a6eeabe0d14"} Mar 13 20:46:13 crc kubenswrapper[5029]: I0313 20:46:13.142549 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" Mar 13 20:46:13 crc kubenswrapper[5029]: I0313 20:46:13.166940 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" podStartSLOduration=4.752313581 podStartE2EDuration="1m0.166920508s" podCreationTimestamp="2026-03-13 20:45:13 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.597412124 +0000 UTC m=+1076.613494527" lastFinishedPulling="2026-03-13 20:46:12.012019051 +0000 UTC 
m=+1132.028101454" observedRunningTime="2026-03-13 20:46:13.160451582 +0000 UTC m=+1133.176533985" watchObservedRunningTime="2026-03-13 20:46:13.166920508 +0000 UTC m=+1133.183002901" Mar 13 20:46:14 crc kubenswrapper[5029]: I0313 20:46:14.473480 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-gzknz" Mar 13 20:46:14 crc kubenswrapper[5029]: I0313 20:46:14.555126 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7f84474648-strvq" Mar 13 20:46:15 crc kubenswrapper[5029]: I0313 20:46:15.153461 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" event={"ID":"60caa364-7d62-4d19-8de1-6b231b90adb7","Type":"ContainerStarted","Data":"ec938aff9f9215b1d8230f6673a71f5e876487d93daeb640b681b44a388cbe40"} Mar 13 20:46:15 crc kubenswrapper[5029]: I0313 20:46:15.153708 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" Mar 13 20:46:15 crc kubenswrapper[5029]: I0313 20:46:15.178907 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" podStartSLOduration=3.753659096 podStartE2EDuration="1m1.178827668s" podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.598517553 +0000 UTC m=+1076.614599956" lastFinishedPulling="2026-03-13 20:46:14.023686125 +0000 UTC m=+1134.039768528" observedRunningTime="2026-03-13 20:46:15.16969074 +0000 UTC m=+1135.185773193" watchObservedRunningTime="2026-03-13 20:46:15.178827668 +0000 UTC m=+1135.194910111" Mar 13 20:46:17 crc kubenswrapper[5029]: I0313 20:46:17.165936 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" event={"ID":"1b78339c-69bb-4905-af68-29313b2e2227","Type":"ContainerStarted","Data":"ee27515639dc022f8797d359016e04bc9bd66929a3bff9d806f5ccf18c21eb72"} Mar 13 20:46:17 crc kubenswrapper[5029]: I0313 20:46:17.166447 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" Mar 13 20:46:17 crc kubenswrapper[5029]: I0313 20:46:17.189285 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" podStartSLOduration=3.728350308 podStartE2EDuration="1m3.189244298s" podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.582671462 +0000 UTC m=+1076.598753855" lastFinishedPulling="2026-03-13 20:46:16.043565442 +0000 UTC m=+1136.059647845" observedRunningTime="2026-03-13 20:46:17.188792976 +0000 UTC m=+1137.204875379" watchObservedRunningTime="2026-03-13 20:46:17.189244298 +0000 UTC m=+1137.205326711" Mar 13 20:46:18 crc kubenswrapper[5029]: I0313 20:46:18.173210 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" event={"ID":"4730a688-7219-434b-8ab5-88c3023144e1","Type":"ContainerStarted","Data":"c39070fafe80bc5effe00687de4737224358bd210b98caff0beb8b29a49c6d30"} Mar 13 20:46:18 crc kubenswrapper[5029]: I0313 20:46:18.195116 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm87s" podStartSLOduration=3.579988318 podStartE2EDuration="1m4.195095876s" podCreationTimestamp="2026-03-13 20:45:14 +0000 UTC" firstStartedPulling="2026-03-13 20:45:16.582164919 +0000 UTC m=+1076.598247322" lastFinishedPulling="2026-03-13 20:46:17.197272477 +0000 UTC m=+1137.213354880" observedRunningTime="2026-03-13 20:46:18.188413075 
+0000 UTC m=+1138.204495498" watchObservedRunningTime="2026-03-13 20:46:18.195095876 +0000 UTC m=+1138.211178279" Mar 13 20:46:24 crc kubenswrapper[5029]: I0313 20:46:24.429536 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-qvzqz" Mar 13 20:46:24 crc kubenswrapper[5029]: I0313 20:46:24.615630 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-2zjps" Mar 13 20:46:25 crc kubenswrapper[5029]: I0313 20:46:25.073121 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-4ckwc" Mar 13 20:46:31 crc kubenswrapper[5029]: I0313 20:46:31.951112 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:46:31 crc kubenswrapper[5029]: I0313 20:46:31.951475 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:46:31 crc kubenswrapper[5029]: I0313 20:46:31.951531 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:46:31 crc kubenswrapper[5029]: I0313 20:46:31.952217 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"098cf3f8300a8686d628684223c880e3efcc22b58099225528ac37cb2f271026"} 
pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 20:46:31 crc kubenswrapper[5029]: I0313 20:46:31.952281 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://098cf3f8300a8686d628684223c880e3efcc22b58099225528ac37cb2f271026" gracePeriod=600 Mar 13 20:46:32 crc kubenswrapper[5029]: I0313 20:46:32.278437 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="098cf3f8300a8686d628684223c880e3efcc22b58099225528ac37cb2f271026" exitCode=0 Mar 13 20:46:32 crc kubenswrapper[5029]: I0313 20:46:32.278691 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"098cf3f8300a8686d628684223c880e3efcc22b58099225528ac37cb2f271026"} Mar 13 20:46:32 crc kubenswrapper[5029]: I0313 20:46:32.278824 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"fc08a3f0bf62f626b96edf0adf5dbb9a0493ba7c49c9be50ad8bce4dd83f3787"} Mar 13 20:46:32 crc kubenswrapper[5029]: I0313 20:46:32.278872 5029 scope.go:117] "RemoveContainer" containerID="4bbea3ecaf26f1609521229697004331cac38ad489818c6871ecf93d481648d2" Mar 13 20:46:34 crc kubenswrapper[5029]: I0313 20:46:34.901918 5029 scope.go:117] "RemoveContainer" containerID="8a8f29775510291207dc2c8e4ec5c2682a4a9da2712a86033ad7d78fdc67714c" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.419718 5029 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-t85tw"] Mar 13 20:46:42 crc kubenswrapper[5029]: E0313 20:46:42.421241 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487af116-ac18-4881-9db5-7c99f89ac667" containerName="oc" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.421261 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="487af116-ac18-4881-9db5-7c99f89ac667" containerName="oc" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.421465 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="487af116-ac18-4881-9db5-7c99f89ac667" containerName="oc" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.422435 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.424828 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mrncb" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.429067 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.449880 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t85tw"] Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.497882 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jj529"] Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.499742 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.518579 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.536474 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jj529"] Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.553748 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-jj529\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.553826 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65tqf\" (UniqueName: \"kubernetes.io/projected/ecb3bccf-4801-4067-be1d-e0c655a754f7-kube-api-access-65tqf\") pod \"dnsmasq-dns-78dd6ddcc-jj529\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.554440 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-config\") pod \"dnsmasq-dns-78dd6ddcc-jj529\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.554498 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d97m9\" (UniqueName: \"kubernetes.io/projected/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-kube-api-access-d97m9\") pod \"dnsmasq-dns-675f4bcbfc-t85tw\" (UID: \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.554532 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-config\") pod \"dnsmasq-dns-675f4bcbfc-t85tw\" (UID: \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.656676 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-jj529\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.656738 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65tqf\" (UniqueName: \"kubernetes.io/projected/ecb3bccf-4801-4067-be1d-e0c655a754f7-kube-api-access-65tqf\") pod \"dnsmasq-dns-78dd6ddcc-jj529\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.657240 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-config\") pod \"dnsmasq-dns-78dd6ddcc-jj529\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.657349 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d97m9\" (UniqueName: \"kubernetes.io/projected/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-kube-api-access-d97m9\") pod \"dnsmasq-dns-675f4bcbfc-t85tw\" (UID: \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 
20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.657424 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-config\") pod \"dnsmasq-dns-675f4bcbfc-t85tw\" (UID: \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.657613 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-jj529\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.658208 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-config\") pod \"dnsmasq-dns-78dd6ddcc-jj529\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.658522 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-config\") pod \"dnsmasq-dns-675f4bcbfc-t85tw\" (UID: \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.680879 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65tqf\" (UniqueName: \"kubernetes.io/projected/ecb3bccf-4801-4067-be1d-e0c655a754f7-kube-api-access-65tqf\") pod \"dnsmasq-dns-78dd6ddcc-jj529\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.692017 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d97m9\" (UniqueName: \"kubernetes.io/projected/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-kube-api-access-d97m9\") pod \"dnsmasq-dns-675f4bcbfc-t85tw\" (UID: \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.753110 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 20:46:42 crc kubenswrapper[5029]: I0313 20:46:42.841350 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:46:43 crc kubenswrapper[5029]: I0313 20:46:43.300079 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t85tw"] Mar 13 20:46:43 crc kubenswrapper[5029]: W0313 20:46:43.310434 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd3b70f_2cb3_40d7_89bb_baa0b20c807b.slice/crio-71a5443c0bc2aaa04617a58d51f08ef9a0fb7849653792833d0cd1d49d9cbb5e WatchSource:0}: Error finding container 71a5443c0bc2aaa04617a58d51f08ef9a0fb7849653792833d0cd1d49d9cbb5e: Status 404 returned error can't find the container with id 71a5443c0bc2aaa04617a58d51f08ef9a0fb7849653792833d0cd1d49d9cbb5e Mar 13 20:46:43 crc kubenswrapper[5029]: I0313 20:46:43.368959 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" event={"ID":"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b","Type":"ContainerStarted","Data":"71a5443c0bc2aaa04617a58d51f08ef9a0fb7849653792833d0cd1d49d9cbb5e"} Mar 13 20:46:43 crc kubenswrapper[5029]: I0313 20:46:43.421303 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jj529"] Mar 13 20:46:43 crc kubenswrapper[5029]: W0313 20:46:43.425549 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecb3bccf_4801_4067_be1d_e0c655a754f7.slice/crio-7b3b70bc890dafa13c416aacdf0cecc653c995a50604689afe15907582a53512 WatchSource:0}: Error finding container 7b3b70bc890dafa13c416aacdf0cecc653c995a50604689afe15907582a53512: Status 404 returned error can't find the container with id 7b3b70bc890dafa13c416aacdf0cecc653c995a50604689afe15907582a53512 Mar 13 20:46:44 crc kubenswrapper[5029]: I0313 20:46:44.387080 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" event={"ID":"ecb3bccf-4801-4067-be1d-e0c655a754f7","Type":"ContainerStarted","Data":"7b3b70bc890dafa13c416aacdf0cecc653c995a50604689afe15907582a53512"} Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.497147 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t85tw"] Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.529953 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-5m8zg"] Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.531513 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.539067 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-5m8zg"] Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.675034 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-5m8zg\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.675100 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-config\") pod \"dnsmasq-dns-5ccc8479f9-5m8zg\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.675126 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4gt\" (UniqueName: \"kubernetes.io/projected/c106c874-14d7-4801-8e74-4c0a0288a3f0-kube-api-access-kb4gt\") pod \"dnsmasq-dns-5ccc8479f9-5m8zg\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.783420 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-5m8zg\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.783478 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-config\") pod \"dnsmasq-dns-5ccc8479f9-5m8zg\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.783505 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb4gt\" (UniqueName: \"kubernetes.io/projected/c106c874-14d7-4801-8e74-4c0a0288a3f0-kube-api-access-kb4gt\") pod \"dnsmasq-dns-5ccc8479f9-5m8zg\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.788439 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-config\") pod \"dnsmasq-dns-5ccc8479f9-5m8zg\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.791215 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jj529"] Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.802405 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-5m8zg\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.830394 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z48jh"] Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.832976 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.864078 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z48jh"] Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.888721 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb4gt\" (UniqueName: \"kubernetes.io/projected/c106c874-14d7-4801-8e74-4c0a0288a3f0-kube-api-access-kb4gt\") pod \"dnsmasq-dns-5ccc8479f9-5m8zg\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.991187 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z48jh\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.991275 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9rlb\" (UniqueName: \"kubernetes.io/projected/d0eda583-786d-49d6-b520-b8b82dbe6f6f-kube-api-access-s9rlb\") pod \"dnsmasq-dns-57d769cc4f-z48jh\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:45 crc kubenswrapper[5029]: I0313 20:46:45.991333 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-config\") pod \"dnsmasq-dns-57d769cc4f-z48jh\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.099960 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-config\") pod \"dnsmasq-dns-57d769cc4f-z48jh\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.100082 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z48jh\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.100126 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9rlb\" (UniqueName: \"kubernetes.io/projected/d0eda583-786d-49d6-b520-b8b82dbe6f6f-kube-api-access-s9rlb\") pod \"dnsmasq-dns-57d769cc4f-z48jh\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.101564 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z48jh\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.101597 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-config\") pod \"dnsmasq-dns-57d769cc4f-z48jh\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.136220 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9rlb\" (UniqueName: \"kubernetes.io/projected/d0eda583-786d-49d6-b520-b8b82dbe6f6f-kube-api-access-s9rlb\") pod 
\"dnsmasq-dns-57d769cc4f-z48jh\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.158670 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.221803 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.670774 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.672444 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.675584 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.675628 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.675584 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.675828 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.676013 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rlc4s" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.676173 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.680547 5029 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.702236 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.812650 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.812712 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/016118a1-8825-4373-a487-2fa17c45488a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.812765 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmsvn\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-kube-api-access-kmsvn\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.812805 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.812845 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.812896 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.812924 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.812948 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.813006 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.813042 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.813065 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/016118a1-8825-4373-a487-2fa17c45488a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.838809 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z48jh"] Mar 13 20:46:46 crc kubenswrapper[5029]: W0313 20:46:46.840757 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0eda583_786d_49d6_b520_b8b82dbe6f6f.slice/crio-4d32faa27b515e6cdbe0e3094c83ba0723a01a5adc7b09686471fee0edf157ea WatchSource:0}: Error finding container 4d32faa27b515e6cdbe0e3094c83ba0723a01a5adc7b09686471fee0edf157ea: Status 404 returned error can't find the container with id 4d32faa27b515e6cdbe0e3094c83ba0723a01a5adc7b09686471fee0edf157ea Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.918084 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.918146 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.918206 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.918267 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.918363 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.918973 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.920083 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 
crc kubenswrapper[5029]: I0313 20:46:46.920132 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/016118a1-8825-4373-a487-2fa17c45488a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.920223 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.920267 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/016118a1-8825-4373-a487-2fa17c45488a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.920307 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmsvn\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-kube-api-access-kmsvn\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.920365 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.921434 5029 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.922077 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.922169 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.924063 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.925259 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.930443 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/016118a1-8825-4373-a487-2fa17c45488a-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.933768 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.936113 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-5m8zg"] Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.938415 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.942012 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/016118a1-8825-4373-a487-2fa17c45488a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.952943 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmsvn\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-kube-api-access-kmsvn\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:46 crc kubenswrapper[5029]: I0313 20:46:46.956661 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.022303 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.033971 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.035645 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.039242 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.039545 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.039720 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l76h4" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.040013 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.040072 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.040308 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.040455 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.052777 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-0"] Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123024 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123087 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123121 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123284 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123366 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 
20:46:47.123427 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123476 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123623 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123768 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123875 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.123958 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9fc\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-kube-api-access-jt9fc\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.225927 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226360 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226423 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226506 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226540 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226608 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226668 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226710 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226782 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226838 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" 
Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.226892 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt9fc\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-kube-api-access-jt9fc\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.227373 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.227974 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.228096 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.231370 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.232008 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.232818 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.236450 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.239136 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.241290 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.242635 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " 
pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.245824 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt9fc\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-kube-api-access-jt9fc\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.264113 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.395759 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.474945 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" event={"ID":"d0eda583-786d-49d6-b520-b8b82dbe6f6f","Type":"ContainerStarted","Data":"4d32faa27b515e6cdbe0e3094c83ba0723a01a5adc7b09686471fee0edf157ea"} Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.478537 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" event={"ID":"c106c874-14d7-4801-8e74-4c0a0288a3f0","Type":"ContainerStarted","Data":"e64e196a555d513618321335dc9543114b136415957ce94d23fc8f8dde694010"} Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.555810 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 20:46:47 crc kubenswrapper[5029]: W0313 20:46:47.572403 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016118a1_8825_4373_a487_2fa17c45488a.slice/crio-784fb70e9547df5c0a14e75fb27176a7997c788eaca25a0d0009cbd82238d0be WatchSource:0}: Error finding container 784fb70e9547df5c0a14e75fb27176a7997c788eaca25a0d0009cbd82238d0be: Status 404 returned error can't find the container with id 784fb70e9547df5c0a14e75fb27176a7997c788eaca25a0d0009cbd82238d0be Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.791464 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.792778 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.795065 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.795953 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.796935 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-kvgzj" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.796997 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.821489 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.823900 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.955065 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/fe158656-b08f-4364-832e-f19c0f46d845-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.955111 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe158656-b08f-4364-832e-f19c0f46d845-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.955154 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe158656-b08f-4364-832e-f19c0f46d845-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.955175 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe158656-b08f-4364-832e-f19c0f46d845-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.955195 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97ng9\" (UniqueName: \"kubernetes.io/projected/fe158656-b08f-4364-832e-f19c0f46d845-kube-api-access-97ng9\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.955221 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/fe158656-b08f-4364-832e-f19c0f46d845-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.955320 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe158656-b08f-4364-832e-f19c0f46d845-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:47 crc kubenswrapper[5029]: I0313 20:46:47.955343 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.084105 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97ng9\" (UniqueName: \"kubernetes.io/projected/fe158656-b08f-4364-832e-f19c0f46d845-kube-api-access-97ng9\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.084167 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe158656-b08f-4364-832e-f19c0f46d845-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.084258 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe158656-b08f-4364-832e-f19c0f46d845-galera-tls-certs\") pod \"openstack-galera-0\" 
(UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.084294 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.084774 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe158656-b08f-4364-832e-f19c0f46d845-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.084808 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe158656-b08f-4364-832e-f19c0f46d845-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.084898 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe158656-b08f-4364-832e-f19c0f46d845-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.084925 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe158656-b08f-4364-832e-f19c0f46d845-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.087027 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe158656-b08f-4364-832e-f19c0f46d845-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.087275 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe158656-b08f-4364-832e-f19c0f46d845-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.087732 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe158656-b08f-4364-832e-f19c0f46d845-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.091599 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe158656-b08f-4364-832e-f19c0f46d845-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.101166 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.108874 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fe158656-b08f-4364-832e-f19c0f46d845-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.109654 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe158656-b08f-4364-832e-f19c0f46d845-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.123650 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97ng9\" (UniqueName: \"kubernetes.io/projected/fe158656-b08f-4364-832e-f19c0f46d845-kube-api-access-97ng9\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.172337 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"fe158656-b08f-4364-832e-f19c0f46d845\") " pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.252294 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.431701 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.541051 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ff0edef-42cf-4ba2-b170-87cfdd6deefb","Type":"ContainerStarted","Data":"47f73fff5aa67dbf2db4fa386878b798427d8b02884e7bfdb049f2d525bc9c0a"} Mar 13 20:46:48 crc kubenswrapper[5029]: I0313 20:46:48.545275 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"016118a1-8825-4373-a487-2fa17c45488a","Type":"ContainerStarted","Data":"784fb70e9547df5c0a14e75fb27176a7997c788eaca25a0d0009cbd82238d0be"} Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.147958 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.159241 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.161250 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.165028 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.165284 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.165759 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rcpmz" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.165865 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.171134 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.493440 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.494770 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.496103 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.496162 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/97961996-b234-441c-ba7c-2c479dfae7f4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.496185 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/97961996-b234-441c-ba7c-2c479dfae7f4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.496199 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97961996-b234-441c-ba7c-2c479dfae7f4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.496223 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97961996-b234-441c-ba7c-2c479dfae7f4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.496242 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2twc\" (UniqueName: \"kubernetes.io/projected/97961996-b234-441c-ba7c-2c479dfae7f4-kube-api-access-x2twc\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.496266 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/97961996-b234-441c-ba7c-2c479dfae7f4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.496311 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/97961996-b234-441c-ba7c-2c479dfae7f4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.499176 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.499415 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-54btn" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.499534 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.504975 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 13 20:46:49 crc 
kubenswrapper[5029]: I0313 20:46:49.598563 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97961996-b234-441c-ba7c-2c479dfae7f4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598610 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/97961996-b234-441c-ba7c-2c479dfae7f4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598644 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97961996-b234-441c-ba7c-2c479dfae7f4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598675 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10b7646b-bd89-43c4-8fa2-2d28c1327c65-combined-ca-bundle\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598709 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2twc\" (UniqueName: \"kubernetes.io/projected/97961996-b234-441c-ba7c-2c479dfae7f4-kube-api-access-x2twc\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598741 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/97961996-b234-441c-ba7c-2c479dfae7f4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598763 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10b7646b-bd89-43c4-8fa2-2d28c1327c65-config-data\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598814 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/97961996-b234-441c-ba7c-2c479dfae7f4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598880 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10b7646b-bd89-43c4-8fa2-2d28c1327c65-kolla-config\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598921 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb674\" (UniqueName: \"kubernetes.io/projected/10b7646b-bd89-43c4-8fa2-2d28c1327c65-kube-api-access-wb674\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598958 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.598985 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/10b7646b-bd89-43c4-8fa2-2d28c1327c65-memcached-tls-certs\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.599021 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/97961996-b234-441c-ba7c-2c479dfae7f4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.599545 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/97961996-b234-441c-ba7c-2c479dfae7f4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.600718 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.602378 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/97961996-b234-441c-ba7c-2c479dfae7f4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.602608 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/97961996-b234-441c-ba7c-2c479dfae7f4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.603662 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/97961996-b234-441c-ba7c-2c479dfae7f4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.608729 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97961996-b234-441c-ba7c-2c479dfae7f4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.610040 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/97961996-b234-441c-ba7c-2c479dfae7f4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.638043 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.701699 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb674\" (UniqueName: \"kubernetes.io/projected/10b7646b-bd89-43c4-8fa2-2d28c1327c65-kube-api-access-wb674\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.701783 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/10b7646b-bd89-43c4-8fa2-2d28c1327c65-memcached-tls-certs\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.701865 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10b7646b-bd89-43c4-8fa2-2d28c1327c65-combined-ca-bundle\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.701966 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10b7646b-bd89-43c4-8fa2-2d28c1327c65-config-data\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.702094 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10b7646b-bd89-43c4-8fa2-2d28c1327c65-kolla-config\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.703037 5029 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10b7646b-bd89-43c4-8fa2-2d28c1327c65-kolla-config\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.705252 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10b7646b-bd89-43c4-8fa2-2d28c1327c65-config-data\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.707226 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10b7646b-bd89-43c4-8fa2-2d28c1327c65-combined-ca-bundle\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.707708 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2twc\" (UniqueName: \"kubernetes.io/projected/97961996-b234-441c-ba7c-2c479dfae7f4-kube-api-access-x2twc\") pod \"openstack-cell1-galera-0\" (UID: \"97961996-b234-441c-ba7c-2c479dfae7f4\") " pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.710863 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/10b7646b-bd89-43c4-8fa2-2d28c1327c65-memcached-tls-certs\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.769534 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb674\" (UniqueName: \"kubernetes.io/projected/10b7646b-bd89-43c4-8fa2-2d28c1327c65-kube-api-access-wb674\") pod \"memcached-0\" (UID: \"10b7646b-bd89-43c4-8fa2-2d28c1327c65\") " 
pod="openstack/memcached-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.782518 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 13 20:46:49 crc kubenswrapper[5029]: I0313 20:46:49.835875 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 13 20:46:52 crc kubenswrapper[5029]: I0313 20:46:52.105371 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 20:46:52 crc kubenswrapper[5029]: I0313 20:46:52.116041 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 20:46:52 crc kubenswrapper[5029]: I0313 20:46:52.116412 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 20:46:52 crc kubenswrapper[5029]: I0313 20:46:52.120483 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zkqgv" Mar 13 20:46:52 crc kubenswrapper[5029]: I0313 20:46:52.276969 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxklx\" (UniqueName: \"kubernetes.io/projected/ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4-kube-api-access-mxklx\") pod \"kube-state-metrics-0\" (UID: \"ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4\") " pod="openstack/kube-state-metrics-0" Mar 13 20:46:52 crc kubenswrapper[5029]: I0313 20:46:52.378442 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxklx\" (UniqueName: \"kubernetes.io/projected/ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4-kube-api-access-mxklx\") pod \"kube-state-metrics-0\" (UID: \"ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4\") " pod="openstack/kube-state-metrics-0" Mar 13 20:46:52 crc kubenswrapper[5029]: I0313 20:46:52.515334 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxklx\" 
(UniqueName: \"kubernetes.io/projected/ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4-kube-api-access-mxklx\") pod \"kube-state-metrics-0\" (UID: \"ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4\") " pod="openstack/kube-state-metrics-0" Mar 13 20:46:52 crc kubenswrapper[5029]: I0313 20:46:52.751087 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.154469 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xvrv7"] Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.156178 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.171200 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.171462 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-mhxr7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.171625 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.180309 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xvrv7"] Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.219811 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/09599f34-8760-4612-9d50-925aeb8134b4-var-log-ovn\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.219884 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/09599f34-8760-4612-9d50-925aeb8134b4-var-run-ovn\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.219927 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09599f34-8760-4612-9d50-925aeb8134b4-combined-ca-bundle\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.219966 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/09599f34-8760-4612-9d50-925aeb8134b4-var-run\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.219984 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9svf\" (UniqueName: \"kubernetes.io/projected/09599f34-8760-4612-9d50-925aeb8134b4-kube-api-access-v9svf\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.220003 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/09599f34-8760-4612-9d50-925aeb8134b4-ovn-controller-tls-certs\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.220027 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/09599f34-8760-4612-9d50-925aeb8134b4-scripts\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.238290 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bj9ld"] Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.240368 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.271059 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bj9ld"] Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.321330 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/09599f34-8760-4612-9d50-925aeb8134b4-var-run-ovn\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.321720 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-var-run\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.321767 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-etc-ovs\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.321811 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/09599f34-8760-4612-9d50-925aeb8134b4-combined-ca-bundle\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.321907 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/09599f34-8760-4612-9d50-925aeb8134b4-var-run\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.321936 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9svf\" (UniqueName: \"kubernetes.io/projected/09599f34-8760-4612-9d50-925aeb8134b4-kube-api-access-v9svf\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.321956 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4389075-f837-43e3-acc4-b577cdf1f05c-scripts\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.321976 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/09599f34-8760-4612-9d50-925aeb8134b4-ovn-controller-tls-certs\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.321998 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb74h\" (UniqueName: 
\"kubernetes.io/projected/c4389075-f837-43e3-acc4-b577cdf1f05c-kube-api-access-wb74h\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.322005 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/09599f34-8760-4612-9d50-925aeb8134b4-var-run-ovn\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.322022 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09599f34-8760-4612-9d50-925aeb8134b4-scripts\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.322039 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-var-log\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.322086 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-var-lib\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.322108 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/09599f34-8760-4612-9d50-925aeb8134b4-var-log-ovn\") pod \"ovn-controller-xvrv7\" (UID: 
\"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.322114 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/09599f34-8760-4612-9d50-925aeb8134b4-var-run\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.322416 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/09599f34-8760-4612-9d50-925aeb8134b4-var-log-ovn\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.324548 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09599f34-8760-4612-9d50-925aeb8134b4-scripts\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.328073 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/09599f34-8760-4612-9d50-925aeb8134b4-ovn-controller-tls-certs\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.338468 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09599f34-8760-4612-9d50-925aeb8134b4-combined-ca-bundle\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.344247 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v9svf\" (UniqueName: \"kubernetes.io/projected/09599f34-8760-4612-9d50-925aeb8134b4-kube-api-access-v9svf\") pod \"ovn-controller-xvrv7\" (UID: \"09599f34-8760-4612-9d50-925aeb8134b4\") " pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.424166 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-var-run\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.424224 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-etc-ovs\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.424281 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4389075-f837-43e3-acc4-b577cdf1f05c-scripts\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.424312 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb74h\" (UniqueName: \"kubernetes.io/projected/c4389075-f837-43e3-acc4-b577cdf1f05c-kube-api-access-wb74h\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.424335 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-var-log\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.424369 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-var-lib\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.424603 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-var-lib\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.424658 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-var-run\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.424760 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-etc-ovs\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.425803 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c4389075-f837-43e3-acc4-b577cdf1f05c-var-log\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " 
pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.438686 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4389075-f837-43e3-acc4-b577cdf1f05c-scripts\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.461519 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb74h\" (UniqueName: \"kubernetes.io/projected/c4389075-f837-43e3-acc4-b577cdf1f05c-kube-api-access-wb74h\") pod \"ovn-controller-ovs-bj9ld\" (UID: \"c4389075-f837-43e3-acc4-b577cdf1f05c\") " pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.487841 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xvrv7" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.559540 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.644040 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.648189 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.651006 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.651011 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-whw26" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.651070 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.654342 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.654462 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.654533 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.729685 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/044b4140-6d50-42d6-893a-2f35ff0bc7b3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.729750 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b4140-6d50-42d6-893a-2f35ff0bc7b3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.729788 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.729898 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/044b4140-6d50-42d6-893a-2f35ff0bc7b3-config\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.729937 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b4140-6d50-42d6-893a-2f35ff0bc7b3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.729977 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b4140-6d50-42d6-893a-2f35ff0bc7b3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.730017 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/044b4140-6d50-42d6-893a-2f35ff0bc7b3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.730077 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrw26\" (UniqueName: 
\"kubernetes.io/projected/044b4140-6d50-42d6-893a-2f35ff0bc7b3-kube-api-access-xrw26\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.832074 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/044b4140-6d50-42d6-893a-2f35ff0bc7b3-config\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.832138 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b4140-6d50-42d6-893a-2f35ff0bc7b3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.832182 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b4140-6d50-42d6-893a-2f35ff0bc7b3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.832219 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/044b4140-6d50-42d6-893a-2f35ff0bc7b3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.832243 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrw26\" (UniqueName: \"kubernetes.io/projected/044b4140-6d50-42d6-893a-2f35ff0bc7b3-kube-api-access-xrw26\") pod \"ovsdbserver-nb-0\" (UID: 
\"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.832293 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/044b4140-6d50-42d6-893a-2f35ff0bc7b3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.832329 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b4140-6d50-42d6-893a-2f35ff0bc7b3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.832357 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.832773 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.833369 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/044b4140-6d50-42d6-893a-2f35ff0bc7b3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.833532 5029 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/044b4140-6d50-42d6-893a-2f35ff0bc7b3-config\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.834345 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/044b4140-6d50-42d6-893a-2f35ff0bc7b3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.841472 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b4140-6d50-42d6-893a-2f35ff0bc7b3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.842353 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b4140-6d50-42d6-893a-2f35ff0bc7b3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.850670 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b4140-6d50-42d6-893a-2f35ff0bc7b3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.853181 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " 
pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.855544 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrw26\" (UniqueName: \"kubernetes.io/projected/044b4140-6d50-42d6-893a-2f35ff0bc7b3-kube-api-access-xrw26\") pod \"ovsdbserver-nb-0\" (UID: \"044b4140-6d50-42d6-893a-2f35ff0bc7b3\") " pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:55 crc kubenswrapper[5029]: I0313 20:46:55.972574 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 13 20:46:58 crc kubenswrapper[5029]: I0313 20:46:58.826535 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe158656-b08f-4364-832e-f19c0f46d845","Type":"ContainerStarted","Data":"1e32b177d5f030869628a799dfa3fa49a419ab18f9f7d448d10a818e2628f7a9"} Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.207433 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.212512 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.214731 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-kwrsm" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.215752 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.215841 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.216099 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.222224 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.298763 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119f4c09-be62-4769-a9e5-1af49cca26c6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.298832 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/119f4c09-be62-4769-a9e5-1af49cca26c6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.298925 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/119f4c09-be62-4769-a9e5-1af49cca26c6-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.298956 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/119f4c09-be62-4769-a9e5-1af49cca26c6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.298981 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/119f4c09-be62-4769-a9e5-1af49cca26c6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.299012 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.299045 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119f4c09-be62-4769-a9e5-1af49cca26c6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.299074 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ts7\" (UniqueName: \"kubernetes.io/projected/119f4c09-be62-4769-a9e5-1af49cca26c6-kube-api-access-k9ts7\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " 
pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.400719 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/119f4c09-be62-4769-a9e5-1af49cca26c6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.400873 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/119f4c09-be62-4769-a9e5-1af49cca26c6-config\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.400913 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/119f4c09-be62-4769-a9e5-1af49cca26c6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.400957 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/119f4c09-be62-4769-a9e5-1af49cca26c6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.401053 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.401086 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/119f4c09-be62-4769-a9e5-1af49cca26c6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.401126 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ts7\" (UniqueName: \"kubernetes.io/projected/119f4c09-be62-4769-a9e5-1af49cca26c6-kube-api-access-k9ts7\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.401179 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119f4c09-be62-4769-a9e5-1af49cca26c6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.401533 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/119f4c09-be62-4769-a9e5-1af49cca26c6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.401734 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.402084 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/119f4c09-be62-4769-a9e5-1af49cca26c6-config\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " 
pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.402557 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119f4c09-be62-4769-a9e5-1af49cca26c6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.407187 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/119f4c09-be62-4769-a9e5-1af49cca26c6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.408573 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/119f4c09-be62-4769-a9e5-1af49cca26c6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.409833 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119f4c09-be62-4769-a9e5-1af49cca26c6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.419693 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ts7\" (UniqueName: \"kubernetes.io/projected/119f4c09-be62-4769-a9e5-1af49cca26c6-kube-api-access-k9ts7\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.426451 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"119f4c09-be62-4769-a9e5-1af49cca26c6\") " pod="openstack/ovsdbserver-sb-0" Mar 13 20:46:59 crc kubenswrapper[5029]: I0313 20:46:59.540939 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 13 20:47:06 crc kubenswrapper[5029]: E0313 20:47:06.261422 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 13 20:47:06 crc kubenswrapper[5029]: E0313 20:47:06.262297 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmsvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(016118a1-8825-4373-a487-2fa17c45488a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:47:06 crc 
kubenswrapper[5029]: E0313 20:47:06.263807 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="016118a1-8825-4373-a487-2fa17c45488a" Mar 13 20:47:06 crc kubenswrapper[5029]: E0313 20:47:06.313543 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 13 20:47:06 crc kubenswrapper[5029]: E0313 20:47:06.313885 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jt9fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(7ff0edef-42cf-4ba2-b170-87cfdd6deefb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:47:06 crc 
kubenswrapper[5029]: E0313 20:47:06.315023 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" Mar 13 20:47:06 crc kubenswrapper[5029]: I0313 20:47:06.655801 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xvrv7"] Mar 13 20:47:06 crc kubenswrapper[5029]: E0313 20:47:06.891009 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="016118a1-8825-4373-a487-2fa17c45488a" Mar 13 20:47:06 crc kubenswrapper[5029]: E0313 20:47:06.891188 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" Mar 13 20:47:10 crc kubenswrapper[5029]: W0313 20:47:10.463551 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09599f34_8760_4612_9d50_925aeb8134b4.slice/crio-f9c3ff34a74d95e9e212c4aa093e979e928aad1523e2a663ae6bf5617a63574a WatchSource:0}: Error finding container f9c3ff34a74d95e9e212c4aa093e979e928aad1523e2a663ae6bf5617a63574a: Status 404 returned error can't find the container with id f9c3ff34a74d95e9e212c4aa093e979e928aad1523e2a663ae6bf5617a63574a Mar 13 20:47:10 crc kubenswrapper[5029]: I0313 20:47:10.918491 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xvrv7" 
event={"ID":"09599f34-8760-4612-9d50-925aeb8134b4","Type":"ContainerStarted","Data":"f9c3ff34a74d95e9e212c4aa093e979e928aad1523e2a663ae6bf5617a63574a"} Mar 13 20:47:13 crc kubenswrapper[5029]: I0313 20:47:13.833209 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.791226 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.792071 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65tqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-jj529_openstack(ecb3bccf-4801-4067-be1d-e0c655a754f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.795168 5029 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" podUID="ecb3bccf-4801-4067-be1d-e0c655a754f7" Mar 13 20:47:14 crc kubenswrapper[5029]: W0313 20:47:14.821721 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod044b4140_6d50_42d6_893a_2f35ff0bc7b3.slice/crio-79205a772e3c2d86d9bd9d87eefc1359bd9ffb1b555b97cff2ef26ee893c0473 WatchSource:0}: Error finding container 79205a772e3c2d86d9bd9d87eefc1359bd9ffb1b555b97cff2ef26ee893c0473: Status 404 returned error can't find the container with id 79205a772e3c2d86d9bd9d87eefc1359bd9ffb1b555b97cff2ef26ee893c0473 Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.839076 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.839624 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb4gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-5m8zg_openstack(c106c874-14d7-4801-8e74-4c0a0288a3f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.840921 5029 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" podUID="c106c874-14d7-4801-8e74-4c0a0288a3f0" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.863750 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.863964 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9rlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-z48jh_openstack(d0eda583-786d-49d6-b520-b8b82dbe6f6f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.869017 5029 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" podUID="d0eda583-786d-49d6-b520-b8b82dbe6f6f" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.944607 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.944779 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d97m9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-t85tw_openstack(7fd3b70f-2cb3-40d7-89bb-baa0b20c807b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.946187 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" podUID="7fd3b70f-2cb3-40d7-89bb-baa0b20c807b" Mar 13 20:47:14 crc kubenswrapper[5029]: I0313 20:47:14.992177 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"044b4140-6d50-42d6-893a-2f35ff0bc7b3","Type":"ContainerStarted","Data":"79205a772e3c2d86d9bd9d87eefc1359bd9ffb1b555b97cff2ef26ee893c0473"} Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.994188 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" podUID="c106c874-14d7-4801-8e74-4c0a0288a3f0" Mar 13 20:47:14 crc kubenswrapper[5029]: E0313 20:47:14.994992 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" podUID="d0eda583-786d-49d6-b520-b8b82dbe6f6f" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.317842 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.537895 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.541670 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.555503 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.598107 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-dns-svc\") pod \"ecb3bccf-4801-4067-be1d-e0c655a754f7\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.598699 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-config\") pod \"ecb3bccf-4801-4067-be1d-e0c655a754f7\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.598758 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65tqf\" (UniqueName: \"kubernetes.io/projected/ecb3bccf-4801-4067-be1d-e0c655a754f7-kube-api-access-65tqf\") pod \"ecb3bccf-4801-4067-be1d-e0c655a754f7\" (UID: \"ecb3bccf-4801-4067-be1d-e0c655a754f7\") " Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.598796 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d97m9\" (UniqueName: \"kubernetes.io/projected/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-kube-api-access-d97m9\") pod \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\" (UID: \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\") " Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.598841 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-config\") pod \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\" (UID: \"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b\") " Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.598966 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ecb3bccf-4801-4067-be1d-e0c655a754f7" (UID: "ecb3bccf-4801-4067-be1d-e0c655a754f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.599217 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.599277 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-config" (OuterVolumeSpecName: "config") pod "ecb3bccf-4801-4067-be1d-e0c655a754f7" (UID: "ecb3bccf-4801-4067-be1d-e0c655a754f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.599381 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-config" (OuterVolumeSpecName: "config") pod "7fd3b70f-2cb3-40d7-89bb-baa0b20c807b" (UID: "7fd3b70f-2cb3-40d7-89bb-baa0b20c807b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.602719 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-kube-api-access-d97m9" (OuterVolumeSpecName: "kube-api-access-d97m9") pod "7fd3b70f-2cb3-40d7-89bb-baa0b20c807b" (UID: "7fd3b70f-2cb3-40d7-89bb-baa0b20c807b"). InnerVolumeSpecName "kube-api-access-d97m9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.602734 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb3bccf-4801-4067-be1d-e0c655a754f7-kube-api-access-65tqf" (OuterVolumeSpecName: "kube-api-access-65tqf") pod "ecb3bccf-4801-4067-be1d-e0c655a754f7" (UID: "ecb3bccf-4801-4067-be1d-e0c655a754f7"). InnerVolumeSpecName "kube-api-access-65tqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.701085 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb3bccf-4801-4067-be1d-e0c655a754f7-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.701113 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65tqf\" (UniqueName: \"kubernetes.io/projected/ecb3bccf-4801-4067-be1d-e0c655a754f7-kube-api-access-65tqf\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.701125 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d97m9\" (UniqueName: \"kubernetes.io/projected/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-kube-api-access-d97m9\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.701134 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.733385 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 20:47:15 crc kubenswrapper[5029]: W0313 20:47:15.748138 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab983f1f_460d_45ac_b8e5_7ccf3e5cdfe4.slice/crio-a903f70c0faed9cca0a1414b50cc390bca25ef8ea45e9efd4efdd3b7b7f05d6d WatchSource:0}: Error finding container a903f70c0faed9cca0a1414b50cc390bca25ef8ea45e9efd4efdd3b7b7f05d6d: Status 404 returned error can't find the container with id a903f70c0faed9cca0a1414b50cc390bca25ef8ea45e9efd4efdd3b7b7f05d6d Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.817661 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 20:47:15 crc kubenswrapper[5029]: I0313 20:47:15.926839 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bj9ld"] Mar 13 20:47:15 crc kubenswrapper[5029]: W0313 20:47:15.934183 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4389075_f837_43e3_acc4_b577cdf1f05c.slice/crio-b79a3efa0ae790f98d4b76aacb82fbb2fdf3b4013727003f6dc7894f4d2dce3c WatchSource:0}: Error finding container b79a3efa0ae790f98d4b76aacb82fbb2fdf3b4013727003f6dc7894f4d2dce3c: Status 404 returned error can't find the container with id b79a3efa0ae790f98d4b76aacb82fbb2fdf3b4013727003f6dc7894f4d2dce3c Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.003668 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe158656-b08f-4364-832e-f19c0f46d845","Type":"ContainerStarted","Data":"1b1231e239850ce8304c78f5955708ae5457c60104bac1013d992fb5468a132c"} Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.009026 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4","Type":"ContainerStarted","Data":"a903f70c0faed9cca0a1414b50cc390bca25ef8ea45e9efd4efdd3b7b7f05d6d"} Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.010657 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/memcached-0" event={"ID":"10b7646b-bd89-43c4-8fa2-2d28c1327c65","Type":"ContainerStarted","Data":"410c0e46f271baae38821f0ecd9b3dbf503966ee4225a27202e892ba2a546316"} Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.012409 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"119f4c09-be62-4769-a9e5-1af49cca26c6","Type":"ContainerStarted","Data":"d8bda102e108e3270faecb5e9453c3a6ef968977f75f5bb11232bf69703fb278"} Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.014033 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bj9ld" event={"ID":"c4389075-f837-43e3-acc4-b577cdf1f05c","Type":"ContainerStarted","Data":"b79a3efa0ae790f98d4b76aacb82fbb2fdf3b4013727003f6dc7894f4d2dce3c"} Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.015533 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"97961996-b234-441c-ba7c-2c479dfae7f4","Type":"ContainerStarted","Data":"11a13849f56964ddf2fe4ecd7f5f31279ae93c6bb197e3aeeb07f829c6800355"} Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.015571 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"97961996-b234-441c-ba7c-2c479dfae7f4","Type":"ContainerStarted","Data":"e8ca142ff77f114424ba996e9b34cf621cc52fe212cea89fa55870047d2c1fae"} Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.018020 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.018096 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-jj529" event={"ID":"ecb3bccf-4801-4067-be1d-e0c655a754f7","Type":"ContainerDied","Data":"7b3b70bc890dafa13c416aacdf0cecc653c995a50604689afe15907582a53512"} Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.019606 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" event={"ID":"7fd3b70f-2cb3-40d7-89bb-baa0b20c807b","Type":"ContainerDied","Data":"71a5443c0bc2aaa04617a58d51f08ef9a0fb7849653792833d0cd1d49d9cbb5e"} Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.019740 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t85tw" Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.133818 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jj529"] Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.142892 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jj529"] Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.161585 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t85tw"] Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.175006 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t85tw"] Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.650608 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd3b70f-2cb3-40d7-89bb-baa0b20c807b" path="/var/lib/kubelet/pods/7fd3b70f-2cb3-40d7-89bb-baa0b20c807b/volumes" Mar 13 20:47:16 crc kubenswrapper[5029]: I0313 20:47:16.656675 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb3bccf-4801-4067-be1d-e0c655a754f7" 
path="/var/lib/kubelet/pods/ecb3bccf-4801-4067-be1d-e0c655a754f7/volumes" Mar 13 20:47:19 crc kubenswrapper[5029]: I0313 20:47:19.050566 5029 generic.go:334] "Generic (PLEG): container finished" podID="fe158656-b08f-4364-832e-f19c0f46d845" containerID="1b1231e239850ce8304c78f5955708ae5457c60104bac1013d992fb5468a132c" exitCode=0 Mar 13 20:47:19 crc kubenswrapper[5029]: I0313 20:47:19.050641 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe158656-b08f-4364-832e-f19c0f46d845","Type":"ContainerDied","Data":"1b1231e239850ce8304c78f5955708ae5457c60104bac1013d992fb5468a132c"} Mar 13 20:47:20 crc kubenswrapper[5029]: I0313 20:47:20.061619 5029 generic.go:334] "Generic (PLEG): container finished" podID="97961996-b234-441c-ba7c-2c479dfae7f4" containerID="11a13849f56964ddf2fe4ecd7f5f31279ae93c6bb197e3aeeb07f829c6800355" exitCode=0 Mar 13 20:47:20 crc kubenswrapper[5029]: I0313 20:47:20.061745 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"97961996-b234-441c-ba7c-2c479dfae7f4","Type":"ContainerDied","Data":"11a13849f56964ddf2fe4ecd7f5f31279ae93c6bb197e3aeeb07f829c6800355"} Mar 13 20:47:21 crc kubenswrapper[5029]: I0313 20:47:21.072433 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe158656-b08f-4364-832e-f19c0f46d845","Type":"ContainerStarted","Data":"7a5542501869e64684ef683b21131c59078b14f1cca0799b072c007a23166ce6"} Mar 13 20:47:21 crc kubenswrapper[5029]: I0313 20:47:21.104821 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.970881805 podStartE2EDuration="35.104799553s" podCreationTimestamp="2026-03-13 20:46:46 +0000 UTC" firstStartedPulling="2026-03-13 20:46:58.745063278 +0000 UTC m=+1178.761145681" lastFinishedPulling="2026-03-13 20:47:14.878981026 +0000 UTC m=+1194.895063429" observedRunningTime="2026-03-13 
20:47:21.095521049 +0000 UTC m=+1201.111603462" watchObservedRunningTime="2026-03-13 20:47:21.104799553 +0000 UTC m=+1201.120881956"
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.080460 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4","Type":"ContainerStarted","Data":"d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa"}
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.082037 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.083389 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"10b7646b-bd89-43c4-8fa2-2d28c1327c65","Type":"ContainerStarted","Data":"549d4e1b5a576e805fcb42d1783f813b19404378148fc5f0ce859e8fad9c52c0"}
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.083596 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.084790 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"119f4c09-be62-4769-a9e5-1af49cca26c6","Type":"ContainerStarted","Data":"02b08f49376cadb9d18b83508f6b5ab2382bdccf06c3e3e8160cf8fe9d7b51b4"}
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.086717 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"97961996-b234-441c-ba7c-2c479dfae7f4","Type":"ContainerStarted","Data":"3bba131510ab21f5407798c681db6c37b4dec3575087ce8ec12e64b3459a2a7b"}
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.087978 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bj9ld" event={"ID":"c4389075-f837-43e3-acc4-b577cdf1f05c","Type":"ContainerStarted","Data":"2db11d528d54379e973f4544761e19ee3e04ae27395f2036016d648a6910a11f"}
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.088993 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xvrv7" event={"ID":"09599f34-8760-4612-9d50-925aeb8134b4","Type":"ContainerStarted","Data":"351de7c7d006ff4c3a86f57979c293ab734c6cd43f0b8cf7841006595f5f950f"}
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.089091 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xvrv7"
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.090522 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"044b4140-6d50-42d6-893a-2f35ff0bc7b3","Type":"ContainerStarted","Data":"3b253be7918b9e9f8e8df100a362d27bb00f4afa14a4ebbacd0f3ffdf720197c"}
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.106267 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=24.961772611 podStartE2EDuration="30.105476629s" podCreationTimestamp="2026-03-13 20:46:52 +0000 UTC" firstStartedPulling="2026-03-13 20:47:15.751659408 +0000 UTC m=+1195.767741811" lastFinishedPulling="2026-03-13 20:47:20.895363426 +0000 UTC m=+1200.911445829" observedRunningTime="2026-03-13 20:47:22.101947953 +0000 UTC m=+1202.118030356" watchObservedRunningTime="2026-03-13 20:47:22.105476629 +0000 UTC m=+1202.121559032"
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.147584 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.584468878 podStartE2EDuration="33.147568098s" podCreationTimestamp="2026-03-13 20:46:49 +0000 UTC" firstStartedPulling="2026-03-13 20:47:15.352107751 +0000 UTC m=+1195.368190154" lastFinishedPulling="2026-03-13 20:47:19.915206971 +0000 UTC m=+1199.931289374" observedRunningTime="2026-03-13 20:47:22.142368796 +0000 UTC m=+1202.158451209" watchObservedRunningTime="2026-03-13 20:47:22.147568098 +0000 UTC m=+1202.163650501"
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.169442 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xvrv7" podStartSLOduration=17.732172635 podStartE2EDuration="27.169421584s" podCreationTimestamp="2026-03-13 20:46:55 +0000 UTC" firstStartedPulling="2026-03-13 20:47:10.46655588 +0000 UTC m=+1190.482638283" lastFinishedPulling="2026-03-13 20:47:19.903804829 +0000 UTC m=+1199.919887232" observedRunningTime="2026-03-13 20:47:22.164020377 +0000 UTC m=+1202.180102800" watchObservedRunningTime="2026-03-13 20:47:22.169421584 +0000 UTC m=+1202.185503987"
Mar 13 20:47:22 crc kubenswrapper[5029]: I0313 20:47:22.194778 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=34.194758036 podStartE2EDuration="34.194758036s" podCreationTimestamp="2026-03-13 20:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:47:22.188318459 +0000 UTC m=+1202.204400882" watchObservedRunningTime="2026-03-13 20:47:22.194758036 +0000 UTC m=+1202.210840439"
Mar 13 20:47:23 crc kubenswrapper[5029]: I0313 20:47:23.098580 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"016118a1-8825-4373-a487-2fa17c45488a","Type":"ContainerStarted","Data":"69b8d86fa5c0171e8ea41bc86941b1160f2a6de1cd11c89e37ba71b2ab3e9d1b"}
Mar 13 20:47:23 crc kubenswrapper[5029]: I0313 20:47:23.100524 5029 generic.go:334] "Generic (PLEG): container finished" podID="c4389075-f837-43e3-acc4-b577cdf1f05c" containerID="2db11d528d54379e973f4544761e19ee3e04ae27395f2036016d648a6910a11f" exitCode=0
Mar 13 20:47:23 crc kubenswrapper[5029]: I0313 20:47:23.100603 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bj9ld" event={"ID":"c4389075-f837-43e3-acc4-b577cdf1f05c","Type":"ContainerDied","Data":"2db11d528d54379e973f4544761e19ee3e04ae27395f2036016d648a6910a11f"}
Mar 13 20:47:23 crc kubenswrapper[5029]: I0313 20:47:23.103248 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ff0edef-42cf-4ba2-b170-87cfdd6deefb","Type":"ContainerStarted","Data":"f101418f370ae7a45ed8ce6c68a911416c7a732eaaa28c1cf01622c29ce93a94"}
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.123383 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bj9ld" event={"ID":"c4389075-f837-43e3-acc4-b577cdf1f05c","Type":"ContainerStarted","Data":"f2da136339f29dbadb822e2d44f977f4e748ccce9643882a7b216a8ac34f9648"}
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.124116 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bj9ld"
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.124142 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bj9ld"
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.124158 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bj9ld" event={"ID":"c4389075-f837-43e3-acc4-b577cdf1f05c","Type":"ContainerStarted","Data":"ab1bb8b67dc662919bef163545f522e3549a2825c335be0f3c048e175c09a126"}
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.126566 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"044b4140-6d50-42d6-893a-2f35ff0bc7b3","Type":"ContainerStarted","Data":"8328345f39f4884585ef44ded53606cb1ee1893af6199b0268a45c6be6cfcfb8"}
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.129078 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"119f4c09-be62-4769-a9e5-1af49cca26c6","Type":"ContainerStarted","Data":"ab889292bbf43bef20c3db1b1069e59f9aae3e1dbd0d8b7daf18150ca6a76ef9"}
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.174321 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bj9ld" podStartSLOduration=27.307276336 podStartE2EDuration="31.174303945s" podCreationTimestamp="2026-03-13 20:46:55 +0000 UTC" firstStartedPulling="2026-03-13 20:47:15.940287037 +0000 UTC m=+1195.956369440" lastFinishedPulling="2026-03-13 20:47:19.807314646 +0000 UTC m=+1199.823397049" observedRunningTime="2026-03-13 20:47:26.170212374 +0000 UTC m=+1206.186294797" watchObservedRunningTime="2026-03-13 20:47:26.174303945 +0000 UTC m=+1206.190386338"
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.213495 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.154539892 podStartE2EDuration="28.213481175s" podCreationTimestamp="2026-03-13 20:46:58 +0000 UTC" firstStartedPulling="2026-03-13 20:47:15.826043988 +0000 UTC m=+1195.842126381" lastFinishedPulling="2026-03-13 20:47:24.884985261 +0000 UTC m=+1204.901067664" observedRunningTime="2026-03-13 20:47:26.210519714 +0000 UTC m=+1206.226602117" watchObservedRunningTime="2026-03-13 20:47:26.213481175 +0000 UTC m=+1206.229563578"
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.218335 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.143257977 podStartE2EDuration="32.218322737s" podCreationTimestamp="2026-03-13 20:46:54 +0000 UTC" firstStartedPulling="2026-03-13 20:47:14.826073402 +0000 UTC m=+1194.842155805" lastFinishedPulling="2026-03-13 20:47:24.901138162 +0000 UTC m=+1204.917220565" observedRunningTime="2026-03-13 20:47:26.194103446 +0000 UTC m=+1206.210185859" watchObservedRunningTime="2026-03-13 20:47:26.218322737 +0000 UTC m=+1206.234405140"
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.542479 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Mar 13 20:47:26 crc kubenswrapper[5029]: I0313 20:47:26.577744 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.137117 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.174989 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.435116 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z48jh"]
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.478794 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-whxvg"]
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.480358 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.486543 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-whxvg"]
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.500225 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.630058 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fzdcm"]
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.630489 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.630638 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rdd\" (UniqueName: \"kubernetes.io/projected/176c30d4-7bbd-42ac-a5bf-87b018e669e3-kube-api-access-22rdd\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.630691 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.630735 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-config\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.631479 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.634143 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.638960 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fzdcm"]
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.733368 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-config\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.733698 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-combined-ca-bundle\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.733733 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.733764 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-config\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.733788 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxl8v\" (UniqueName: \"kubernetes.io/projected/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-kube-api-access-wxl8v\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.733940 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.733991 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22rdd\" (UniqueName: \"kubernetes.io/projected/176c30d4-7bbd-42ac-a5bf-87b018e669e3-kube-api-access-22rdd\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.734013 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-ovs-rundir\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.734064 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-ovn-rundir\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.734087 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.735340 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.735793 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-config\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.736095 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.757256 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22rdd\" (UniqueName: \"kubernetes.io/projected/176c30d4-7bbd-42ac-a5bf-87b018e669e3-kube-api-access-22rdd\") pod \"dnsmasq-dns-7f896c8c65-whxvg\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.802732 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.835299 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z48jh"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.836373 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-combined-ca-bundle\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.836457 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-config\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.836507 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxl8v\" (UniqueName: \"kubernetes.io/projected/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-kube-api-access-wxl8v\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.836671 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.836758 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-ovs-rundir\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.836818 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-ovn-rundir\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.837227 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-ovn-rundir\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.838079 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-ovs-rundir\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.838879 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-config\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.841651 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-combined-ca-bundle\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.842798 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.857036 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxl8v\" (UniqueName: \"kubernetes.io/projected/0a248b29-b82a-41d1-aaa3-e7d12210ae6c-kube-api-access-wxl8v\") pod \"ovn-controller-metrics-fzdcm\" (UID: \"0a248b29-b82a-41d1-aaa3-e7d12210ae6c\") " pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.916178 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-5m8zg"]
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.940039 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9rlb\" (UniqueName: \"kubernetes.io/projected/d0eda583-786d-49d6-b520-b8b82dbe6f6f-kube-api-access-s9rlb\") pod \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") "
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.940139 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-config\") pod \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") "
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.940210 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-dns-svc\") pod \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\" (UID: \"d0eda583-786d-49d6-b520-b8b82dbe6f6f\") "
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.944083 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0eda583-786d-49d6-b520-b8b82dbe6f6f" (UID: "d0eda583-786d-49d6-b520-b8b82dbe6f6f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.944729 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-config" (OuterVolumeSpecName: "config") pod "d0eda583-786d-49d6-b520-b8b82dbe6f6f" (UID: "d0eda583-786d-49d6-b520-b8b82dbe6f6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.948916 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wcgc6"]
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.952083 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.953488 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0eda583-786d-49d6-b520-b8b82dbe6f6f-kube-api-access-s9rlb" (OuterVolumeSpecName: "kube-api-access-s9rlb") pod "d0eda583-786d-49d6-b520-b8b82dbe6f6f" (UID: "d0eda583-786d-49d6-b520-b8b82dbe6f6f"). InnerVolumeSpecName "kube-api-access-s9rlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.959320 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.964345 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wcgc6"]
Mar 13 20:47:27 crc kubenswrapper[5029]: I0313 20:47:27.965446 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fzdcm"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.050981 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fctwk\" (UniqueName: \"kubernetes.io/projected/70c7e4de-b839-4da7-91a8-474a4a5fd16f-kube-api-access-fctwk\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.051329 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.052021 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.052054 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.052165 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-config\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.052474 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9rlb\" (UniqueName: \"kubernetes.io/projected/d0eda583-786d-49d6-b520-b8b82dbe6f6f-kube-api-access-s9rlb\") on node \"crc\" DevicePath \"\""
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.052494 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.052507 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0eda583-786d-49d6-b520-b8b82dbe6f6f-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.144699 5029 generic.go:334] "Generic (PLEG): container finished" podID="c106c874-14d7-4801-8e74-4c0a0288a3f0" containerID="c45278e694c031adf65564486b6113e42331fc70c228789e0948ce175e4648fb" exitCode=0
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.144823 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" event={"ID":"c106c874-14d7-4801-8e74-4c0a0288a3f0","Type":"ContainerDied","Data":"c45278e694c031adf65564486b6113e42331fc70c228789e0948ce175e4648fb"}
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.153634 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z48jh"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.154768 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z48jh" event={"ID":"d0eda583-786d-49d6-b520-b8b82dbe6f6f","Type":"ContainerDied","Data":"4d32faa27b515e6cdbe0e3094c83ba0723a01a5adc7b09686471fee0edf157ea"}
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.160722 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.158843 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.160814 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.161497 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.161638 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-config\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.167142 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-config\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.167377 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fctwk\" (UniqueName: \"kubernetes.io/projected/70c7e4de-b839-4da7-91a8-474a4a5fd16f-kube-api-access-fctwk\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.167702 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.168622 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.197667 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fctwk\" (UniqueName: \"kubernetes.io/projected/70c7e4de-b839-4da7-91a8-474a4a5fd16f-kube-api-access-fctwk\") pod \"dnsmasq-dns-86db49b7ff-wcgc6\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.262269 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z48jh"]
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.271777 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z48jh"]
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.273042 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.380220 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-whxvg"]
Mar 13 20:47:28 crc kubenswrapper[5029]: W0313 20:47:28.415801 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod176c30d4_7bbd_42ac_a5bf_87b018e669e3.slice/crio-b6f6bb1cd1df011c2aa1246e6a767f33bd94eee5a40d030233160c75f3df9f74 WatchSource:0}: Error finding container b6f6bb1cd1df011c2aa1246e6a767f33bd94eee5a40d030233160c75f3df9f74: Status 404 returned error can't find the container with id b6f6bb1cd1df011c2aa1246e6a767f33bd94eee5a40d030233160c75f3df9f74
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.432580 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.432649 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.482087 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.525540 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fzdcm"] Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.556547 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.582819 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-dns-svc\") pod \"c106c874-14d7-4801-8e74-4c0a0288a3f0\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.582971 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb4gt\" (UniqueName: \"kubernetes.io/projected/c106c874-14d7-4801-8e74-4c0a0288a3f0-kube-api-access-kb4gt\") pod \"c106c874-14d7-4801-8e74-4c0a0288a3f0\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.583338 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-config\") pod \"c106c874-14d7-4801-8e74-4c0a0288a3f0\" (UID: \"c106c874-14d7-4801-8e74-4c0a0288a3f0\") " Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.589563 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c106c874-14d7-4801-8e74-4c0a0288a3f0-kube-api-access-kb4gt" (OuterVolumeSpecName: "kube-api-access-kb4gt") pod "c106c874-14d7-4801-8e74-4c0a0288a3f0" (UID: "c106c874-14d7-4801-8e74-4c0a0288a3f0"). InnerVolumeSpecName "kube-api-access-kb4gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.604906 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c106c874-14d7-4801-8e74-4c0a0288a3f0" (UID: "c106c874-14d7-4801-8e74-4c0a0288a3f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.609054 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-config" (OuterVolumeSpecName: "config") pod "c106c874-14d7-4801-8e74-4c0a0288a3f0" (UID: "c106c874-14d7-4801-8e74-4c0a0288a3f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.615476 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0eda583-786d-49d6-b520-b8b82dbe6f6f" path="/var/lib/kubelet/pods/d0eda583-786d-49d6-b520-b8b82dbe6f6f/volumes" Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.685426 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.685466 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c106c874-14d7-4801-8e74-4c0a0288a3f0-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.685482 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb4gt\" (UniqueName: \"kubernetes.io/projected/c106c874-14d7-4801-8e74-4c0a0288a3f0-kube-api-access-kb4gt\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.831121 
5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wcgc6"] Mar 13 20:47:28 crc kubenswrapper[5029]: W0313 20:47:28.841302 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70c7e4de_b839_4da7_91a8_474a4a5fd16f.slice/crio-bfa377bebbd63e289364d2e83486511c23f52f050f4aa8008e7343558369243f WatchSource:0}: Error finding container bfa377bebbd63e289364d2e83486511c23f52f050f4aa8008e7343558369243f: Status 404 returned error can't find the container with id bfa377bebbd63e289364d2e83486511c23f52f050f4aa8008e7343558369243f Mar 13 20:47:28 crc kubenswrapper[5029]: I0313 20:47:28.973174 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.018395 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.157775 5029 generic.go:334] "Generic (PLEG): container finished" podID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" containerID="9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e" exitCode=0 Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.157833 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" event={"ID":"70c7e4de-b839-4da7-91a8-474a4a5fd16f","Type":"ContainerDied","Data":"9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e"} Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.158185 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" event={"ID":"70c7e4de-b839-4da7-91a8-474a4a5fd16f","Type":"ContainerStarted","Data":"bfa377bebbd63e289364d2e83486511c23f52f050f4aa8008e7343558369243f"} Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.159456 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="176c30d4-7bbd-42ac-a5bf-87b018e669e3" containerID="85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e" exitCode=0 Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.159519 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" event={"ID":"176c30d4-7bbd-42ac-a5bf-87b018e669e3","Type":"ContainerDied","Data":"85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e"} Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.159557 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" event={"ID":"176c30d4-7bbd-42ac-a5bf-87b018e669e3","Type":"ContainerStarted","Data":"b6f6bb1cd1df011c2aa1246e6a767f33bd94eee5a40d030233160c75f3df9f74"} Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.162838 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" event={"ID":"c106c874-14d7-4801-8e74-4c0a0288a3f0","Type":"ContainerDied","Data":"e64e196a555d513618321335dc9543114b136415957ce94d23fc8f8dde694010"} Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.162909 5029 scope.go:117] "RemoveContainer" containerID="c45278e694c031adf65564486b6113e42331fc70c228789e0948ce175e4648fb" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.163053 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-5m8zg" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.170153 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fzdcm" event={"ID":"0a248b29-b82a-41d1-aaa3-e7d12210ae6c","Type":"ContainerStarted","Data":"42e86c61cb8ca84449e967b6038c845b47f6a56b6a5427042bd50ad56c234488"} Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.170191 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fzdcm" event={"ID":"0a248b29-b82a-41d1-aaa3-e7d12210ae6c","Type":"ContainerStarted","Data":"88561cb0bd5c8495d3f851382028c4eb77aea0ccc33d6dab04049c10010ef48c"} Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.170631 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.210369 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fzdcm" podStartSLOduration=2.210342 podStartE2EDuration="2.210342s" podCreationTimestamp="2026-03-13 20:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:47:29.203265628 +0000 UTC m=+1209.219348041" watchObservedRunningTime="2026-03-13 20:47:29.210342 +0000 UTC m=+1209.226424403" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.223176 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.327622 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.404306 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-5m8zg"] Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.412967 5029 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-5m8zg"] Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.569017 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 13 20:47:29 crc kubenswrapper[5029]: E0313 20:47:29.569378 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c106c874-14d7-4801-8e74-4c0a0288a3f0" containerName="init" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.569391 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="c106c874-14d7-4801-8e74-4c0a0288a3f0" containerName="init" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.615747 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="c106c874-14d7-4801-8e74-4c0a0288a3f0" containerName="init" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.617564 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.617727 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.624515 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-j4f8l" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.624755 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.624939 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.625106 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.715172 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5fb5\" (UniqueName: \"kubernetes.io/projected/a6777edf-388f-48a7-92aa-eff24b6b2bfd-kube-api-access-b5fb5\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.715224 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6777edf-388f-48a7-92aa-eff24b6b2bfd-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.715253 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6777edf-388f-48a7-92aa-eff24b6b2bfd-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.715300 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6777edf-388f-48a7-92aa-eff24b6b2bfd-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.715323 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6777edf-388f-48a7-92aa-eff24b6b2bfd-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.715342 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6777edf-388f-48a7-92aa-eff24b6b2bfd-scripts\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.715364 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6777edf-388f-48a7-92aa-eff24b6b2bfd-config\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.783287 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.783547 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.816942 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5fb5\" (UniqueName: 
\"kubernetes.io/projected/a6777edf-388f-48a7-92aa-eff24b6b2bfd-kube-api-access-b5fb5\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.817000 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6777edf-388f-48a7-92aa-eff24b6b2bfd-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.817028 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6777edf-388f-48a7-92aa-eff24b6b2bfd-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.817102 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6777edf-388f-48a7-92aa-eff24b6b2bfd-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.817132 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6777edf-388f-48a7-92aa-eff24b6b2bfd-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.817161 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6777edf-388f-48a7-92aa-eff24b6b2bfd-scripts\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc 
kubenswrapper[5029]: I0313 20:47:29.817191 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6777edf-388f-48a7-92aa-eff24b6b2bfd-config\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.818117 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6777edf-388f-48a7-92aa-eff24b6b2bfd-config\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.818126 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6777edf-388f-48a7-92aa-eff24b6b2bfd-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.818408 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6777edf-388f-48a7-92aa-eff24b6b2bfd-scripts\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.820863 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6777edf-388f-48a7-92aa-eff24b6b2bfd-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.821287 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6777edf-388f-48a7-92aa-eff24b6b2bfd-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.821726 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6777edf-388f-48a7-92aa-eff24b6b2bfd-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.836408 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5fb5\" (UniqueName: \"kubernetes.io/projected/a6777edf-388f-48a7-92aa-eff24b6b2bfd-kube-api-access-b5fb5\") pod \"ovn-northd-0\" (UID: \"a6777edf-388f-48a7-92aa-eff24b6b2bfd\") " pod="openstack/ovn-northd-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.837152 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.864490 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 13 20:47:29 crc kubenswrapper[5029]: I0313 20:47:29.947579 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.136905 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a338-account-create-update-bfp8h"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.138219 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.143069 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.149025 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-wwvxt"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.150278 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.157920 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wwvxt"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.165381 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a338-account-create-update-bfp8h"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.197567 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" event={"ID":"70c7e4de-b839-4da7-91a8-474a4a5fd16f","Type":"ContainerStarted","Data":"aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303"} Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.198079 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.225538 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb427263-6866-4a0a-ab33-e69f6890b52a-operator-scripts\") pod \"glance-a338-account-create-update-bfp8h\" (UID: \"fb427263-6866-4a0a-ab33-e69f6890b52a\") " pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.225599 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-45jbz\" (UniqueName: \"kubernetes.io/projected/55bdc521-fd20-4ff3-8561-715dd41e604f-kube-api-access-45jbz\") pod \"glance-db-create-wwvxt\" (UID: \"55bdc521-fd20-4ff3-8561-715dd41e604f\") " pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.225691 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bdc521-fd20-4ff3-8561-715dd41e604f-operator-scripts\") pod \"glance-db-create-wwvxt\" (UID: \"55bdc521-fd20-4ff3-8561-715dd41e604f\") " pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.225802 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s7qt\" (UniqueName: \"kubernetes.io/projected/fb427263-6866-4a0a-ab33-e69f6890b52a-kube-api-access-5s7qt\") pod \"glance-a338-account-create-update-bfp8h\" (UID: \"fb427263-6866-4a0a-ab33-e69f6890b52a\") " pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.230160 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" event={"ID":"176c30d4-7bbd-42ac-a5bf-87b018e669e3","Type":"ContainerStarted","Data":"4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6"} Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.230973 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.270366 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" podStartSLOduration=3.270342485 podStartE2EDuration="3.270342485s" podCreationTimestamp="2026-03-13 20:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-13 20:47:30.23862488 +0000 UTC m=+1210.254707283" watchObservedRunningTime="2026-03-13 20:47:30.270342485 +0000 UTC m=+1210.286424888" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.274797 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" podStartSLOduration=3.274771086 podStartE2EDuration="3.274771086s" podCreationTimestamp="2026-03-13 20:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:47:30.268372152 +0000 UTC m=+1210.284454555" watchObservedRunningTime="2026-03-13 20:47:30.274771086 +0000 UTC m=+1210.290853489" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.326865 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s7qt\" (UniqueName: \"kubernetes.io/projected/fb427263-6866-4a0a-ab33-e69f6890b52a-kube-api-access-5s7qt\") pod \"glance-a338-account-create-update-bfp8h\" (UID: \"fb427263-6866-4a0a-ab33-e69f6890b52a\") " pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.326945 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb427263-6866-4a0a-ab33-e69f6890b52a-operator-scripts\") pod \"glance-a338-account-create-update-bfp8h\" (UID: \"fb427263-6866-4a0a-ab33-e69f6890b52a\") " pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.326975 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jbz\" (UniqueName: \"kubernetes.io/projected/55bdc521-fd20-4ff3-8561-715dd41e604f-kube-api-access-45jbz\") pod \"glance-db-create-wwvxt\" (UID: \"55bdc521-fd20-4ff3-8561-715dd41e604f\") " pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:30 crc 
kubenswrapper[5029]: I0313 20:47:30.327020 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bdc521-fd20-4ff3-8561-715dd41e604f-operator-scripts\") pod \"glance-db-create-wwvxt\" (UID: \"55bdc521-fd20-4ff3-8561-715dd41e604f\") " pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.327919 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bdc521-fd20-4ff3-8561-715dd41e604f-operator-scripts\") pod \"glance-db-create-wwvxt\" (UID: \"55bdc521-fd20-4ff3-8561-715dd41e604f\") " pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.328530 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb427263-6866-4a0a-ab33-e69f6890b52a-operator-scripts\") pod \"glance-a338-account-create-update-bfp8h\" (UID: \"fb427263-6866-4a0a-ab33-e69f6890b52a\") " pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.352232 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s7qt\" (UniqueName: \"kubernetes.io/projected/fb427263-6866-4a0a-ab33-e69f6890b52a-kube-api-access-5s7qt\") pod \"glance-a338-account-create-update-bfp8h\" (UID: \"fb427263-6866-4a0a-ab33-e69f6890b52a\") " pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.352413 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jbz\" (UniqueName: \"kubernetes.io/projected/55bdc521-fd20-4ff3-8561-715dd41e604f-kube-api-access-45jbz\") pod \"glance-db-create-wwvxt\" (UID: \"55bdc521-fd20-4ff3-8561-715dd41e604f\") " pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 
20:47:30.353576 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.447044 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.467591 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.488394 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.620725 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c106c874-14d7-4801-8e74-4c0a0288a3f0" path="/var/lib/kubelet/pods/c106c874-14d7-4801-8e74-4c0a0288a3f0/volumes" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.841739 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rgf5k"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.843827 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.853475 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rgf5k"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.886243 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wwvxt"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.936379 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a338-account-create-update-bfp8h"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.938132 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mcf\" (UniqueName: \"kubernetes.io/projected/205e0049-29c0-4ebc-8cb3-670e58c8af28-kube-api-access-57mcf\") pod \"keystone-db-create-rgf5k\" (UID: \"205e0049-29c0-4ebc-8cb3-670e58c8af28\") " pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.938250 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e0049-29c0-4ebc-8cb3-670e58c8af28-operator-scripts\") pod \"keystone-db-create-rgf5k\" (UID: \"205e0049-29c0-4ebc-8cb3-670e58c8af28\") " pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:30 crc kubenswrapper[5029]: W0313 20:47:30.938740 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb427263_6866_4a0a_ab33_e69f6890b52a.slice/crio-0a7cb4f7b7adffef853aee65a8a53458af4f8cdc4d6489de3c89791bf93e0d03 WatchSource:0}: Error finding container 0a7cb4f7b7adffef853aee65a8a53458af4f8cdc4d6489de3c89791bf93e0d03: Status 404 returned error can't find the container with id 0a7cb4f7b7adffef853aee65a8a53458af4f8cdc4d6489de3c89791bf93e0d03 Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.953366 5029 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1a94-account-create-update-phrtc"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.954767 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.961882 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1a94-account-create-update-phrtc"] Mar 13 20:47:30 crc kubenswrapper[5029]: I0313 20:47:30.963312 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.039837 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e0049-29c0-4ebc-8cb3-670e58c8af28-operator-scripts\") pod \"keystone-db-create-rgf5k\" (UID: \"205e0049-29c0-4ebc-8cb3-670e58c8af28\") " pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.039947 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ljcl\" (UniqueName: \"kubernetes.io/projected/d6a13c94-9043-44b0-a90a-0f6b60863453-kube-api-access-4ljcl\") pod \"keystone-1a94-account-create-update-phrtc\" (UID: \"d6a13c94-9043-44b0-a90a-0f6b60863453\") " pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.039978 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a13c94-9043-44b0-a90a-0f6b60863453-operator-scripts\") pod \"keystone-1a94-account-create-update-phrtc\" (UID: \"d6a13c94-9043-44b0-a90a-0f6b60863453\") " pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.040059 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mcf\" (UniqueName: \"kubernetes.io/projected/205e0049-29c0-4ebc-8cb3-670e58c8af28-kube-api-access-57mcf\") pod \"keystone-db-create-rgf5k\" (UID: \"205e0049-29c0-4ebc-8cb3-670e58c8af28\") " pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.040556 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pswbz"] Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.041564 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e0049-29c0-4ebc-8cb3-670e58c8af28-operator-scripts\") pod \"keystone-db-create-rgf5k\" (UID: \"205e0049-29c0-4ebc-8cb3-670e58c8af28\") " pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.041618 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pswbz" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.048664 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pswbz"] Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.067968 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mcf\" (UniqueName: \"kubernetes.io/projected/205e0049-29c0-4ebc-8cb3-670e58c8af28-kube-api-access-57mcf\") pod \"keystone-db-create-rgf5k\" (UID: \"205e0049-29c0-4ebc-8cb3-670e58c8af28\") " pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.146617 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e5e0db0-ec13-4d33-9c31-311982a5d598-operator-scripts\") pod \"placement-db-create-pswbz\" (UID: \"1e5e0db0-ec13-4d33-9c31-311982a5d598\") " pod="openstack/placement-db-create-pswbz" 
Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.146679 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv95q\" (UniqueName: \"kubernetes.io/projected/1e5e0db0-ec13-4d33-9c31-311982a5d598-kube-api-access-cv95q\") pod \"placement-db-create-pswbz\" (UID: \"1e5e0db0-ec13-4d33-9c31-311982a5d598\") " pod="openstack/placement-db-create-pswbz" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.146713 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ljcl\" (UniqueName: \"kubernetes.io/projected/d6a13c94-9043-44b0-a90a-0f6b60863453-kube-api-access-4ljcl\") pod \"keystone-1a94-account-create-update-phrtc\" (UID: \"d6a13c94-9043-44b0-a90a-0f6b60863453\") " pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.146732 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a13c94-9043-44b0-a90a-0f6b60863453-operator-scripts\") pod \"keystone-1a94-account-create-update-phrtc\" (UID: \"d6a13c94-9043-44b0-a90a-0f6b60863453\") " pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.147343 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a13c94-9043-44b0-a90a-0f6b60863453-operator-scripts\") pod \"keystone-1a94-account-create-update-phrtc\" (UID: \"d6a13c94-9043-44b0-a90a-0f6b60863453\") " pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.170354 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.170796 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ljcl\" (UniqueName: \"kubernetes.io/projected/d6a13c94-9043-44b0-a90a-0f6b60863453-kube-api-access-4ljcl\") pod \"keystone-1a94-account-create-update-phrtc\" (UID: \"d6a13c94-9043-44b0-a90a-0f6b60863453\") " pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.248891 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e5e0db0-ec13-4d33-9c31-311982a5d598-operator-scripts\") pod \"placement-db-create-pswbz\" (UID: \"1e5e0db0-ec13-4d33-9c31-311982a5d598\") " pod="openstack/placement-db-create-pswbz" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.249002 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv95q\" (UniqueName: \"kubernetes.io/projected/1e5e0db0-ec13-4d33-9c31-311982a5d598-kube-api-access-cv95q\") pod \"placement-db-create-pswbz\" (UID: \"1e5e0db0-ec13-4d33-9c31-311982a5d598\") " pod="openstack/placement-db-create-pswbz" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.249951 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c8fe-account-create-update-srjsm"] Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.250295 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e5e0db0-ec13-4d33-9c31-311982a5d598-operator-scripts\") pod \"placement-db-create-pswbz\" (UID: \"1e5e0db0-ec13-4d33-9c31-311982a5d598\") " pod="openstack/placement-db-create-pswbz" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.251173 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.254264 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.265837 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wwvxt" event={"ID":"55bdc521-fd20-4ff3-8561-715dd41e604f","Type":"ContainerStarted","Data":"c94405627ad1a0d0ab5d8cc6cf9533c40f3fcbb38dbe3239d2e6e0553c302c7a"} Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.265895 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wwvxt" event={"ID":"55bdc521-fd20-4ff3-8561-715dd41e604f","Type":"ContainerStarted","Data":"37b905f2e86f9e1b0f7d4b197cbc003894e8321d04acd9f88f11c261124e7526"} Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.269633 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a338-account-create-update-bfp8h" event={"ID":"fb427263-6866-4a0a-ab33-e69f6890b52a","Type":"ContainerStarted","Data":"89b9d55cce5567356ede20aae3526de586639b64acbb27fa74f8c5bc4ccf3f6e"} Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.269681 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a338-account-create-update-bfp8h" event={"ID":"fb427263-6866-4a0a-ab33-e69f6890b52a","Type":"ContainerStarted","Data":"0a7cb4f7b7adffef853aee65a8a53458af4f8cdc4d6489de3c89791bf93e0d03"} Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.270532 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv95q\" (UniqueName: \"kubernetes.io/projected/1e5e0db0-ec13-4d33-9c31-311982a5d598-kube-api-access-cv95q\") pod \"placement-db-create-pswbz\" (UID: \"1e5e0db0-ec13-4d33-9c31-311982a5d598\") " pod="openstack/placement-db-create-pswbz" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.276893 5029 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a6777edf-388f-48a7-92aa-eff24b6b2bfd","Type":"ContainerStarted","Data":"89d2f7935876a03d1ca4aa1ab286ee770bb769f7c26b165bded210c52a43ab88"} Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.283143 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8fe-account-create-update-srjsm"] Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.294272 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.315395 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-a338-account-create-update-bfp8h" podStartSLOduration=1.315371502 podStartE2EDuration="1.315371502s" podCreationTimestamp="2026-03-13 20:47:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:47:31.290424861 +0000 UTC m=+1211.306507324" watchObservedRunningTime="2026-03-13 20:47:31.315371502 +0000 UTC m=+1211.331453905" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.335123 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-wwvxt" podStartSLOduration=1.335101701 podStartE2EDuration="1.335101701s" podCreationTimestamp="2026-03-13 20:47:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:47:31.319010741 +0000 UTC m=+1211.335093144" watchObservedRunningTime="2026-03-13 20:47:31.335101701 +0000 UTC m=+1211.351184104" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.352774 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/81e35749-84ef-4c66-ba93-835828ffcbda-operator-scripts\") pod \"placement-c8fe-account-create-update-srjsm\" (UID: \"81e35749-84ef-4c66-ba93-835828ffcbda\") " pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.353286 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhf7f\" (UniqueName: \"kubernetes.io/projected/81e35749-84ef-4c66-ba93-835828ffcbda-kube-api-access-bhf7f\") pod \"placement-c8fe-account-create-update-srjsm\" (UID: \"81e35749-84ef-4c66-ba93-835828ffcbda\") " pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.457131 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhf7f\" (UniqueName: \"kubernetes.io/projected/81e35749-84ef-4c66-ba93-835828ffcbda-kube-api-access-bhf7f\") pod \"placement-c8fe-account-create-update-srjsm\" (UID: \"81e35749-84ef-4c66-ba93-835828ffcbda\") " pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.458203 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pswbz" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.459012 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e35749-84ef-4c66-ba93-835828ffcbda-operator-scripts\") pod \"placement-c8fe-account-create-update-srjsm\" (UID: \"81e35749-84ef-4c66-ba93-835828ffcbda\") " pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.460220 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e35749-84ef-4c66-ba93-835828ffcbda-operator-scripts\") pod \"placement-c8fe-account-create-update-srjsm\" (UID: \"81e35749-84ef-4c66-ba93-835828ffcbda\") " pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.484312 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhf7f\" (UniqueName: \"kubernetes.io/projected/81e35749-84ef-4c66-ba93-835828ffcbda-kube-api-access-bhf7f\") pod \"placement-c8fe-account-create-update-srjsm\" (UID: \"81e35749-84ef-4c66-ba93-835828ffcbda\") " pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.619240 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.736817 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rgf5k"] Mar 13 20:47:31 crc kubenswrapper[5029]: W0313 20:47:31.819904 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod205e0049_29c0_4ebc_8cb3_670e58c8af28.slice/crio-63b5510e0ae82607a8b9458b76f7baa5a9b6c48d7daca050ee0c7252240c21ac WatchSource:0}: Error finding container 63b5510e0ae82607a8b9458b76f7baa5a9b6c48d7daca050ee0c7252240c21ac: Status 404 returned error can't find the container with id 63b5510e0ae82607a8b9458b76f7baa5a9b6c48d7daca050ee0c7252240c21ac Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.873412 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1a94-account-create-update-phrtc"] Mar 13 20:47:31 crc kubenswrapper[5029]: I0313 20:47:31.992293 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pswbz"] Mar 13 20:47:31 crc kubenswrapper[5029]: W0313 20:47:31.995314 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e5e0db0_ec13_4d33_9c31_311982a5d598.slice/crio-a077fe290beeb04d7009479842b35e881888264492ad073c5c062d78b2398aa8 WatchSource:0}: Error finding container a077fe290beeb04d7009479842b35e881888264492ad073c5c062d78b2398aa8: Status 404 returned error can't find the container with id a077fe290beeb04d7009479842b35e881888264492ad073c5c062d78b2398aa8 Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.288441 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1a94-account-create-update-phrtc" event={"ID":"d6a13c94-9043-44b0-a90a-0f6b60863453","Type":"ContainerStarted","Data":"5b8a00803966d5592155880091dc5bf9384167c3ab151e503b4ccee70193f14e"} Mar 13 
20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.290401 5029 generic.go:334] "Generic (PLEG): container finished" podID="55bdc521-fd20-4ff3-8561-715dd41e604f" containerID="c94405627ad1a0d0ab5d8cc6cf9533c40f3fcbb38dbe3239d2e6e0553c302c7a" exitCode=0 Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.290645 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wwvxt" event={"ID":"55bdc521-fd20-4ff3-8561-715dd41e604f","Type":"ContainerDied","Data":"c94405627ad1a0d0ab5d8cc6cf9533c40f3fcbb38dbe3239d2e6e0553c302c7a"} Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.290710 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8fe-account-create-update-srjsm"] Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.298480 5029 generic.go:334] "Generic (PLEG): container finished" podID="fb427263-6866-4a0a-ab33-e69f6890b52a" containerID="89b9d55cce5567356ede20aae3526de586639b64acbb27fa74f8c5bc4ccf3f6e" exitCode=0 Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.298536 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a338-account-create-update-bfp8h" event={"ID":"fb427263-6866-4a0a-ab33-e69f6890b52a","Type":"ContainerDied","Data":"89b9d55cce5567356ede20aae3526de586639b64acbb27fa74f8c5bc4ccf3f6e"} Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.310599 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pswbz" event={"ID":"1e5e0db0-ec13-4d33-9c31-311982a5d598","Type":"ContainerStarted","Data":"a077fe290beeb04d7009479842b35e881888264492ad073c5c062d78b2398aa8"} Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.334192 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rgf5k" event={"ID":"205e0049-29c0-4ebc-8cb3-670e58c8af28","Type":"ContainerStarted","Data":"941abca712d6cc10ba4fd44442e3d4a2baae7fd375af7dfdf49a9d79ff55e945"} Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.334252 
5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rgf5k" event={"ID":"205e0049-29c0-4ebc-8cb3-670e58c8af28","Type":"ContainerStarted","Data":"63b5510e0ae82607a8b9458b76f7baa5a9b6c48d7daca050ee0c7252240c21ac"} Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.382822 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-rgf5k" podStartSLOduration=2.38280505 podStartE2EDuration="2.38280505s" podCreationTimestamp="2026-03-13 20:47:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:47:32.370773491 +0000 UTC m=+1212.386855894" watchObservedRunningTime="2026-03-13 20:47:32.38280505 +0000 UTC m=+1212.398887453" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.525558 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-whxvg"] Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.558197 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-cbld8"] Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.561889 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.580369 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cbld8"] Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.584404 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-dns-svc\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.584456 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g65h5\" (UniqueName: \"kubernetes.io/projected/1b871cf9-26fb-481d-8404-9c767e53937c-kube-api-access-g65h5\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.584521 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.584682 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.584807 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-config\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.687020 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-dns-svc\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.687357 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g65h5\" (UniqueName: \"kubernetes.io/projected/1b871cf9-26fb-481d-8404-9c767e53937c-kube-api-access-g65h5\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.687394 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.687440 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.687499 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-config\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.688304 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-config\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.688891 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-dns-svc\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.689636 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.694502 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.713282 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g65h5\" (UniqueName: \"kubernetes.io/projected/1b871cf9-26fb-481d-8404-9c767e53937c-kube-api-access-g65h5\") pod 
\"dnsmasq-dns-698758b865-cbld8\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") " pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.755057 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 13 20:47:32 crc kubenswrapper[5029]: I0313 20:47:32.898999 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.348281 5029 generic.go:334] "Generic (PLEG): container finished" podID="81e35749-84ef-4c66-ba93-835828ffcbda" containerID="b3f3f36afde5bad99e1d0336c5a5a61b68adb19e8da6edbf7bc9b5768f51cd18" exitCode=0 Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.348359 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8fe-account-create-update-srjsm" event={"ID":"81e35749-84ef-4c66-ba93-835828ffcbda","Type":"ContainerDied","Data":"b3f3f36afde5bad99e1d0336c5a5a61b68adb19e8da6edbf7bc9b5768f51cd18"} Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.348386 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8fe-account-create-update-srjsm" event={"ID":"81e35749-84ef-4c66-ba93-835828ffcbda","Type":"ContainerStarted","Data":"8d21a6c67027d12b41a69ac85aee10c3902ba3cf46105fe086ce9be8ab14c34a"} Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.350554 5029 generic.go:334] "Generic (PLEG): container finished" podID="1e5e0db0-ec13-4d33-9c31-311982a5d598" containerID="932bc1a58c685d23a716f969afd784d9f6ab1d3ec96fb391cd6d1c679f9c869f" exitCode=0 Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.350699 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pswbz" event={"ID":"1e5e0db0-ec13-4d33-9c31-311982a5d598","Type":"ContainerDied","Data":"932bc1a58c685d23a716f969afd784d9f6ab1d3ec96fb391cd6d1c679f9c869f"} Mar 13 20:47:33 crc 
kubenswrapper[5029]: I0313 20:47:33.352873 5029 generic.go:334] "Generic (PLEG): container finished" podID="205e0049-29c0-4ebc-8cb3-670e58c8af28" containerID="941abca712d6cc10ba4fd44442e3d4a2baae7fd375af7dfdf49a9d79ff55e945" exitCode=0 Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.353054 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rgf5k" event={"ID":"205e0049-29c0-4ebc-8cb3-670e58c8af28","Type":"ContainerDied","Data":"941abca712d6cc10ba4fd44442e3d4a2baae7fd375af7dfdf49a9d79ff55e945"} Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.355951 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a6777edf-388f-48a7-92aa-eff24b6b2bfd","Type":"ContainerStarted","Data":"52dc7da469331411bda904d68f09b03acfcbb0b7baf18a252f8c51b0416ec410"} Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.355993 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a6777edf-388f-48a7-92aa-eff24b6b2bfd","Type":"ContainerStarted","Data":"3f351736865e1f405b1f2a0f9a103774155f727092b9140869233bc05198a980"} Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.356161 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.357767 5029 generic.go:334] "Generic (PLEG): container finished" podID="d6a13c94-9043-44b0-a90a-0f6b60863453" containerID="fbc80d437010d0e8e16130343bf792279e595d1ac96f24d57d23a3e5ca38dd7d" exitCode=0 Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.358030 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1a94-account-create-update-phrtc" event={"ID":"d6a13c94-9043-44b0-a90a-0f6b60863453","Type":"ContainerDied","Data":"fbc80d437010d0e8e16130343bf792279e595d1ac96f24d57d23a3e5ca38dd7d"} Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.358361 5029 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" podUID="176c30d4-7bbd-42ac-a5bf-87b018e669e3" containerName="dnsmasq-dns" containerID="cri-o://4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6" gracePeriod=10 Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.415722 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cbld8"] Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.427102 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.763995167 podStartE2EDuration="4.427072265s" podCreationTimestamp="2026-03-13 20:47:29 +0000 UTC" firstStartedPulling="2026-03-13 20:47:30.362378217 +0000 UTC m=+1210.378460620" lastFinishedPulling="2026-03-13 20:47:32.025455315 +0000 UTC m=+1212.041537718" observedRunningTime="2026-03-13 20:47:33.423423965 +0000 UTC m=+1213.439506368" watchObservedRunningTime="2026-03-13 20:47:33.427072265 +0000 UTC m=+1213.443154658" Mar 13 20:47:33 crc kubenswrapper[5029]: W0313 20:47:33.458461 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b871cf9_26fb_481d_8404_9c767e53937c.slice/crio-6cef8bbadfcf3f1637ceb7fcd3c8eaf0ce98252ff7d27d48c8dce9c9af75214b WatchSource:0}: Error finding container 6cef8bbadfcf3f1637ceb7fcd3c8eaf0ce98252ff7d27d48c8dce9c9af75214b: Status 404 returned error can't find the container with id 6cef8bbadfcf3f1637ceb7fcd3c8eaf0ce98252ff7d27d48c8dce9c9af75214b Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.685256 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.711806 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.713135 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.714609 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.714790 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.715000 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.715555 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7s6f5" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.775760 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.900603 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.910765 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45jbz\" (UniqueName: \"kubernetes.io/projected/55bdc521-fd20-4ff3-8561-715dd41e604f-kube-api-access-45jbz\") pod \"55bdc521-fd20-4ff3-8561-715dd41e604f\" (UID: \"55bdc521-fd20-4ff3-8561-715dd41e604f\") " Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.911069 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bdc521-fd20-4ff3-8561-715dd41e604f-operator-scripts\") pod \"55bdc521-fd20-4ff3-8561-715dd41e604f\" (UID: \"55bdc521-fd20-4ff3-8561-715dd41e604f\") " Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.911349 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275d8\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-kube-api-access-275d8\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.911424 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.911459 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81a1e5be-bbdf-4a80-a209-3acb956f5c86-lock\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.911494 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a1e5be-bbdf-4a80-a209-3acb956f5c86-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.911521 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.911554 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81a1e5be-bbdf-4a80-a209-3acb956f5c86-cache\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.911924 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55bdc521-fd20-4ff3-8561-715dd41e604f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55bdc521-fd20-4ff3-8561-715dd41e604f" (UID: "55bdc521-fd20-4ff3-8561-715dd41e604f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:33 crc kubenswrapper[5029]: I0313 20:47:33.917653 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55bdc521-fd20-4ff3-8561-715dd41e604f-kube-api-access-45jbz" (OuterVolumeSpecName: "kube-api-access-45jbz") pod "55bdc521-fd20-4ff3-8561-715dd41e604f" (UID: "55bdc521-fd20-4ff3-8561-715dd41e604f"). InnerVolumeSpecName "kube-api-access-45jbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.019510 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s7qt\" (UniqueName: \"kubernetes.io/projected/fb427263-6866-4a0a-ab33-e69f6890b52a-kube-api-access-5s7qt\") pod \"fb427263-6866-4a0a-ab33-e69f6890b52a\" (UID: \"fb427263-6866-4a0a-ab33-e69f6890b52a\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.019561 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb427263-6866-4a0a-ab33-e69f6890b52a-operator-scripts\") pod \"fb427263-6866-4a0a-ab33-e69f6890b52a\" (UID: \"fb427263-6866-4a0a-ab33-e69f6890b52a\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.019900 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.019939 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81a1e5be-bbdf-4a80-a209-3acb956f5c86-lock\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.019973 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a1e5be-bbdf-4a80-a209-3acb956f5c86-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.019998 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.020028 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81a1e5be-bbdf-4a80-a209-3acb956f5c86-cache\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.020061 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275d8\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-kube-api-access-275d8\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.020120 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55bdc521-fd20-4ff3-8561-715dd41e604f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.020132 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45jbz\" (UniqueName: \"kubernetes.io/projected/55bdc521-fd20-4ff3-8561-715dd41e604f-kube-api-access-45jbz\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.020288 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb427263-6866-4a0a-ab33-e69f6890b52a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb427263-6866-4a0a-ab33-e69f6890b52a" (UID: "fb427263-6866-4a0a-ab33-e69f6890b52a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.020310 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: E0313 20:47:34.020502 5029 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 20:47:34 crc kubenswrapper[5029]: E0313 20:47:34.020528 5029 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 20:47:34 crc kubenswrapper[5029]: E0313 20:47:34.020572 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift podName:81a1e5be-bbdf-4a80-a209-3acb956f5c86 nodeName:}" failed. No retries permitted until 2026-03-13 20:47:34.520554895 +0000 UTC m=+1214.536637298 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift") pod "swift-storage-0" (UID: "81a1e5be-bbdf-4a80-a209-3acb956f5c86") : configmap "swift-ring-files" not found Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.020739 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81a1e5be-bbdf-4a80-a209-3acb956f5c86-cache\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.020994 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81a1e5be-bbdf-4a80-a209-3acb956f5c86-lock\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.028596 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a1e5be-bbdf-4a80-a209-3acb956f5c86-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.038719 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb427263-6866-4a0a-ab33-e69f6890b52a-kube-api-access-5s7qt" (OuterVolumeSpecName: "kube-api-access-5s7qt") pod "fb427263-6866-4a0a-ab33-e69f6890b52a" (UID: "fb427263-6866-4a0a-ab33-e69f6890b52a"). InnerVolumeSpecName "kube-api-access-5s7qt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.038918 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275d8\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-kube-api-access-275d8\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.044272 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.121573 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s7qt\" (UniqueName: \"kubernetes.io/projected/fb427263-6866-4a0a-ab33-e69f6890b52a-kube-api-access-5s7qt\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.121615 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb427263-6866-4a0a-ab33-e69f6890b52a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.138637 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.324258 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-config\") pod \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.324576 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-ovsdbserver-sb\") pod \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.324720 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-dns-svc\") pod \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.324751 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22rdd\" (UniqueName: \"kubernetes.io/projected/176c30d4-7bbd-42ac-a5bf-87b018e669e3-kube-api-access-22rdd\") pod \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\" (UID: \"176c30d4-7bbd-42ac-a5bf-87b018e669e3\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.331149 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176c30d4-7bbd-42ac-a5bf-87b018e669e3-kube-api-access-22rdd" (OuterVolumeSpecName: "kube-api-access-22rdd") pod "176c30d4-7bbd-42ac-a5bf-87b018e669e3" (UID: "176c30d4-7bbd-42ac-a5bf-87b018e669e3"). InnerVolumeSpecName "kube-api-access-22rdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.367123 5029 generic.go:334] "Generic (PLEG): container finished" podID="176c30d4-7bbd-42ac-a5bf-87b018e669e3" containerID="4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6" exitCode=0 Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.367932 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.367967 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" event={"ID":"176c30d4-7bbd-42ac-a5bf-87b018e669e3","Type":"ContainerDied","Data":"4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6"} Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.370360 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-whxvg" event={"ID":"176c30d4-7bbd-42ac-a5bf-87b018e669e3","Type":"ContainerDied","Data":"b6f6bb1cd1df011c2aa1246e6a767f33bd94eee5a40d030233160c75f3df9f74"} Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.370627 5029 scope.go:117] "RemoveContainer" containerID="4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.371998 5029 generic.go:334] "Generic (PLEG): container finished" podID="1b871cf9-26fb-481d-8404-9c767e53937c" containerID="5723ba9a162a9754a90e4fef1199e39f3b33753f8e56d6d546bc211daf54d0a4" exitCode=0 Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.372068 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cbld8" event={"ID":"1b871cf9-26fb-481d-8404-9c767e53937c","Type":"ContainerDied","Data":"5723ba9a162a9754a90e4fef1199e39f3b33753f8e56d6d546bc211daf54d0a4"} Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.372095 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-698758b865-cbld8" event={"ID":"1b871cf9-26fb-481d-8404-9c767e53937c","Type":"ContainerStarted","Data":"6cef8bbadfcf3f1637ceb7fcd3c8eaf0ce98252ff7d27d48c8dce9c9af75214b"} Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.379009 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wwvxt" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.379030 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wwvxt" event={"ID":"55bdc521-fd20-4ff3-8561-715dd41e604f","Type":"ContainerDied","Data":"37b905f2e86f9e1b0f7d4b197cbc003894e8321d04acd9f88f11c261124e7526"} Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.379070 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b905f2e86f9e1b0f7d4b197cbc003894e8321d04acd9f88f11c261124e7526" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.381714 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "176c30d4-7bbd-42ac-a5bf-87b018e669e3" (UID: "176c30d4-7bbd-42ac-a5bf-87b018e669e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.382006 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a338-account-create-update-bfp8h" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.382098 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a338-account-create-update-bfp8h" event={"ID":"fb427263-6866-4a0a-ab33-e69f6890b52a","Type":"ContainerDied","Data":"0a7cb4f7b7adffef853aee65a8a53458af4f8cdc4d6489de3c89791bf93e0d03"} Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.382120 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a7cb4f7b7adffef853aee65a8a53458af4f8cdc4d6489de3c89791bf93e0d03" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.391161 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "176c30d4-7bbd-42ac-a5bf-87b018e669e3" (UID: "176c30d4-7bbd-42ac-a5bf-87b018e669e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.414013 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-config" (OuterVolumeSpecName: "config") pod "176c30d4-7bbd-42ac-a5bf-87b018e669e3" (UID: "176c30d4-7bbd-42ac-a5bf-87b018e669e3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.431602 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.431678 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22rdd\" (UniqueName: \"kubernetes.io/projected/176c30d4-7bbd-42ac-a5bf-87b018e669e3-kube-api-access-22rdd\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.431693 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.431706 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176c30d4-7bbd-42ac-a5bf-87b018e669e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.482942 5029 scope.go:117] "RemoveContainer" containerID="85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.507552 5029 scope.go:117] "RemoveContainer" containerID="4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6" Mar 13 20:47:34 crc kubenswrapper[5029]: E0313 20:47:34.509754 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6\": container with ID starting with 4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6 not found: ID does not exist" containerID="4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.509799 
5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6"} err="failed to get container status \"4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6\": rpc error: code = NotFound desc = could not find container \"4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6\": container with ID starting with 4f7ed7ebbba541e7ff3ad8cf9fb2c8d8e74a1d61161602e2ff806cb4a19ceee6 not found: ID does not exist" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.509830 5029 scope.go:117] "RemoveContainer" containerID="85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e" Mar 13 20:47:34 crc kubenswrapper[5029]: E0313 20:47:34.510104 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e\": container with ID starting with 85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e not found: ID does not exist" containerID="85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.510138 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e"} err="failed to get container status \"85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e\": rpc error: code = NotFound desc = could not find container \"85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e\": container with ID starting with 85d2265124b5f1cfc2ad9e070823ae3f8392c761308038d82ea5dba16abd638e not found: ID does not exist" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.532822 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:34 crc kubenswrapper[5029]: E0313 20:47:34.533147 5029 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 20:47:34 crc kubenswrapper[5029]: E0313 20:47:34.533163 5029 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 20:47:34 crc kubenswrapper[5029]: E0313 20:47:34.533206 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift podName:81a1e5be-bbdf-4a80-a209-3acb956f5c86 nodeName:}" failed. No retries permitted until 2026-03-13 20:47:35.533190989 +0000 UTC m=+1215.549273392 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift") pod "swift-storage-0" (UID: "81a1e5be-bbdf-4a80-a209-3acb956f5c86") : configmap "swift-ring-files" not found Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.747022 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-whxvg"] Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.747726 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.756481 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-whxvg"] Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.846807 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhf7f\" (UniqueName: \"kubernetes.io/projected/81e35749-84ef-4c66-ba93-835828ffcbda-kube-api-access-bhf7f\") pod \"81e35749-84ef-4c66-ba93-835828ffcbda\" (UID: \"81e35749-84ef-4c66-ba93-835828ffcbda\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.847148 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e35749-84ef-4c66-ba93-835828ffcbda-operator-scripts\") pod \"81e35749-84ef-4c66-ba93-835828ffcbda\" (UID: \"81e35749-84ef-4c66-ba93-835828ffcbda\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.847570 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e35749-84ef-4c66-ba93-835828ffcbda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81e35749-84ef-4c66-ba93-835828ffcbda" (UID: "81e35749-84ef-4c66-ba93-835828ffcbda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.859015 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e35749-84ef-4c66-ba93-835828ffcbda-kube-api-access-bhf7f" (OuterVolumeSpecName: "kube-api-access-bhf7f") pod "81e35749-84ef-4c66-ba93-835828ffcbda" (UID: "81e35749-84ef-4c66-ba93-835828ffcbda"). InnerVolumeSpecName "kube-api-access-bhf7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.940312 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.948938 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57mcf\" (UniqueName: \"kubernetes.io/projected/205e0049-29c0-4ebc-8cb3-670e58c8af28-kube-api-access-57mcf\") pod \"205e0049-29c0-4ebc-8cb3-670e58c8af28\" (UID: \"205e0049-29c0-4ebc-8cb3-670e58c8af28\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.949271 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e0049-29c0-4ebc-8cb3-670e58c8af28-operator-scripts\") pod \"205e0049-29c0-4ebc-8cb3-670e58c8af28\" (UID: \"205e0049-29c0-4ebc-8cb3-670e58c8af28\") " Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.949632 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e35749-84ef-4c66-ba93-835828ffcbda-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.949650 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhf7f\" (UniqueName: \"kubernetes.io/projected/81e35749-84ef-4c66-ba93-835828ffcbda-kube-api-access-bhf7f\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.953396 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205e0049-29c0-4ebc-8cb3-670e58c8af28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "205e0049-29c0-4ebc-8cb3-670e58c8af28" (UID: "205e0049-29c0-4ebc-8cb3-670e58c8af28"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:34 crc kubenswrapper[5029]: I0313 20:47:34.962781 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205e0049-29c0-4ebc-8cb3-670e58c8af28-kube-api-access-57mcf" (OuterVolumeSpecName: "kube-api-access-57mcf") pod "205e0049-29c0-4ebc-8cb3-670e58c8af28" (UID: "205e0049-29c0-4ebc-8cb3-670e58c8af28"). InnerVolumeSpecName "kube-api-access-57mcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.053219 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e0049-29c0-4ebc-8cb3-670e58c8af28-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.053289 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57mcf\" (UniqueName: \"kubernetes.io/projected/205e0049-29c0-4ebc-8cb3-670e58c8af28-kube-api-access-57mcf\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.224256 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-d4mwd"] Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.224701 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176c30d4-7bbd-42ac-a5bf-87b018e669e3" containerName="init" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.224720 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="176c30d4-7bbd-42ac-a5bf-87b018e669e3" containerName="init" Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.224771 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb427263-6866-4a0a-ab33-e69f6890b52a" containerName="mariadb-account-create-update" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.224780 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb427263-6866-4a0a-ab33-e69f6890b52a" 
containerName="mariadb-account-create-update" Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.224791 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bdc521-fd20-4ff3-8561-715dd41e604f" containerName="mariadb-database-create" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.224797 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bdc521-fd20-4ff3-8561-715dd41e604f" containerName="mariadb-database-create" Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.224839 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e35749-84ef-4c66-ba93-835828ffcbda" containerName="mariadb-account-create-update" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.224858 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e35749-84ef-4c66-ba93-835828ffcbda" containerName="mariadb-account-create-update" Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.224870 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176c30d4-7bbd-42ac-a5bf-87b018e669e3" containerName="dnsmasq-dns" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.224876 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="176c30d4-7bbd-42ac-a5bf-87b018e669e3" containerName="dnsmasq-dns" Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.224885 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e0049-29c0-4ebc-8cb3-670e58c8af28" containerName="mariadb-database-create" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.224893 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e0049-29c0-4ebc-8cb3-670e58c8af28" containerName="mariadb-database-create" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.225227 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb427263-6866-4a0a-ab33-e69f6890b52a" containerName="mariadb-account-create-update" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.225251 5029 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="176c30d4-7bbd-42ac-a5bf-87b018e669e3" containerName="dnsmasq-dns" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.225287 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e0049-29c0-4ebc-8cb3-670e58c8af28" containerName="mariadb-database-create" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.225296 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="55bdc521-fd20-4ff3-8561-715dd41e604f" containerName="mariadb-database-create" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.225305 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="81e35749-84ef-4c66-ba93-835828ffcbda" containerName="mariadb-account-create-update" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.226425 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.226435 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.229025 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4g67r" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.229132 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.234563 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pswbz" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.242428 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d4mwd"] Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.325845 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e5e0db0-ec13-4d33-9c31-311982a5d598-operator-scripts\") pod \"1e5e0db0-ec13-4d33-9c31-311982a5d598\" (UID: \"1e5e0db0-ec13-4d33-9c31-311982a5d598\") " Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.325926 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ljcl\" (UniqueName: \"kubernetes.io/projected/d6a13c94-9043-44b0-a90a-0f6b60863453-kube-api-access-4ljcl\") pod \"d6a13c94-9043-44b0-a90a-0f6b60863453\" (UID: \"d6a13c94-9043-44b0-a90a-0f6b60863453\") " Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.326010 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv95q\" (UniqueName: \"kubernetes.io/projected/1e5e0db0-ec13-4d33-9c31-311982a5d598-kube-api-access-cv95q\") pod \"1e5e0db0-ec13-4d33-9c31-311982a5d598\" (UID: \"1e5e0db0-ec13-4d33-9c31-311982a5d598\") " Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.326087 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a13c94-9043-44b0-a90a-0f6b60863453-operator-scripts\") pod \"d6a13c94-9043-44b0-a90a-0f6b60863453\" (UID: \"d6a13c94-9043-44b0-a90a-0f6b60863453\") " Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.326377 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6xls\" (UniqueName: \"kubernetes.io/projected/f3fdc768-348b-4581-a918-a009351efeee-kube-api-access-c6xls\") pod \"glance-db-sync-d4mwd\" (UID: 
\"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.326415 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-config-data\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.326512 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-combined-ca-bundle\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.326535 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-db-sync-config-data\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.326536 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e5e0db0-ec13-4d33-9c31-311982a5d598-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e5e0db0-ec13-4d33-9c31-311982a5d598" (UID: "1e5e0db0-ec13-4d33-9c31-311982a5d598"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.326869 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a13c94-9043-44b0-a90a-0f6b60863453-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6a13c94-9043-44b0-a90a-0f6b60863453" (UID: "d6a13c94-9043-44b0-a90a-0f6b60863453"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.333162 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5e0db0-ec13-4d33-9c31-311982a5d598-kube-api-access-cv95q" (OuterVolumeSpecName: "kube-api-access-cv95q") pod "1e5e0db0-ec13-4d33-9c31-311982a5d598" (UID: "1e5e0db0-ec13-4d33-9c31-311982a5d598"). InnerVolumeSpecName "kube-api-access-cv95q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.334195 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a13c94-9043-44b0-a90a-0f6b60863453-kube-api-access-4ljcl" (OuterVolumeSpecName: "kube-api-access-4ljcl") pod "d6a13c94-9043-44b0-a90a-0f6b60863453" (UID: "d6a13c94-9043-44b0-a90a-0f6b60863453"). InnerVolumeSpecName "kube-api-access-4ljcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.389601 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8fe-account-create-update-srjsm" event={"ID":"81e35749-84ef-4c66-ba93-835828ffcbda","Type":"ContainerDied","Data":"8d21a6c67027d12b41a69ac85aee10c3902ba3cf46105fe086ce9be8ab14c34a"} Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.389639 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d21a6c67027d12b41a69ac85aee10c3902ba3cf46105fe086ce9be8ab14c34a" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.389694 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8fe-account-create-update-srjsm" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.399084 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pswbz" event={"ID":"1e5e0db0-ec13-4d33-9c31-311982a5d598","Type":"ContainerDied","Data":"a077fe290beeb04d7009479842b35e881888264492ad073c5c062d78b2398aa8"} Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.399124 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a077fe290beeb04d7009479842b35e881888264492ad073c5c062d78b2398aa8" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.399187 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pswbz" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.402761 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rgf5k" event={"ID":"205e0049-29c0-4ebc-8cb3-670e58c8af28","Type":"ContainerDied","Data":"63b5510e0ae82607a8b9458b76f7baa5a9b6c48d7daca050ee0c7252240c21ac"} Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.402812 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b5510e0ae82607a8b9458b76f7baa5a9b6c48d7daca050ee0c7252240c21ac" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.403075 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rgf5k" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.412358 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cbld8" event={"ID":"1b871cf9-26fb-481d-8404-9c767e53937c","Type":"ContainerStarted","Data":"f9477643abaa9c7938b07177b1184fd19ebef5e684476848a80097c91612d7db"} Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.419301 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1a94-account-create-update-phrtc" event={"ID":"d6a13c94-9043-44b0-a90a-0f6b60863453","Type":"ContainerDied","Data":"5b8a00803966d5592155880091dc5bf9384167c3ab151e503b4ccee70193f14e"} Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.419356 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1a94-account-create-update-phrtc" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.419367 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8a00803966d5592155880091dc5bf9384167c3ab151e503b4ccee70193f14e" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.427908 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-config-data\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.428078 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-combined-ca-bundle\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.428120 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-db-sync-config-data\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.428197 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6xls\" (UniqueName: \"kubernetes.io/projected/f3fdc768-348b-4581-a918-a009351efeee-kube-api-access-c6xls\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.428258 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1e5e0db0-ec13-4d33-9c31-311982a5d598-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.428273 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ljcl\" (UniqueName: \"kubernetes.io/projected/d6a13c94-9043-44b0-a90a-0f6b60863453-kube-api-access-4ljcl\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.428287 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv95q\" (UniqueName: \"kubernetes.io/projected/1e5e0db0-ec13-4d33-9c31-311982a5d598-kube-api-access-cv95q\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.428301 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a13c94-9043-44b0-a90a-0f6b60863453-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.434688 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-combined-ca-bundle\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.436924 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-db-sync-config-data\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.437233 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-config-data\") pod \"glance-db-sync-d4mwd\" (UID: 
\"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.448282 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6xls\" (UniqueName: \"kubernetes.io/projected/f3fdc768-348b-4581-a918-a009351efeee-kube-api-access-c6xls\") pod \"glance-db-sync-d4mwd\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.525271 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e5e0db0_ec13_4d33_9c31_311982a5d598.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6a13c94_9043_44b0_a90a_0f6b60863453.slice/crio-5b8a00803966d5592155880091dc5bf9384167c3ab151e503b4ccee70193f14e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod205e0049_29c0_4ebc_8cb3_670e58c8af28.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6a13c94_9043_44b0_a90a_0f6b60863453.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81e35749_84ef_4c66_ba93_835828ffcbda.slice\": RecentStats: unable to find data in memory cache]" Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.631404 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.631621 5029 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.632210 5029 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 20:47:35 crc kubenswrapper[5029]: E0313 20:47:35.632315 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift podName:81a1e5be-bbdf-4a80-a209-3acb956f5c86 nodeName:}" failed. No retries permitted until 2026-03-13 20:47:37.63228184 +0000 UTC m=+1217.648364263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift") pod "swift-storage-0" (UID: "81a1e5be-bbdf-4a80-a209-3acb956f5c86") : configmap "swift-ring-files" not found Mar 13 20:47:35 crc kubenswrapper[5029]: I0313 20:47:35.735150 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-d4mwd" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.627192 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176c30d4-7bbd-42ac-a5bf-87b018e669e3" path="/var/lib/kubelet/pods/176c30d4-7bbd-42ac-a5bf-87b018e669e3/volumes" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.698106 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-59vfk"] Mar 13 20:47:36 crc kubenswrapper[5029]: E0313 20:47:36.698650 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a13c94-9043-44b0-a90a-0f6b60863453" containerName="mariadb-account-create-update" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.698680 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a13c94-9043-44b0-a90a-0f6b60863453" containerName="mariadb-account-create-update" Mar 13 20:47:36 crc kubenswrapper[5029]: E0313 20:47:36.698696 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5e0db0-ec13-4d33-9c31-311982a5d598" containerName="mariadb-database-create" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.698705 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5e0db0-ec13-4d33-9c31-311982a5d598" containerName="mariadb-database-create" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.698987 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5e0db0-ec13-4d33-9c31-311982a5d598" containerName="mariadb-database-create" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.699009 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a13c94-9043-44b0-a90a-0f6b60863453" containerName="mariadb-account-create-update" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.699902 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.703053 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.708088 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-59vfk"] Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.853668 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tdpc\" (UniqueName: \"kubernetes.io/projected/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-kube-api-access-6tdpc\") pod \"root-account-create-update-59vfk\" (UID: \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\") " pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.854146 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-operator-scripts\") pod \"root-account-create-update-59vfk\" (UID: \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\") " pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.906619 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d4mwd"] Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.956060 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-operator-scripts\") pod \"root-account-create-update-59vfk\" (UID: \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\") " pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.956129 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tdpc\" 
(UniqueName: \"kubernetes.io/projected/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-kube-api-access-6tdpc\") pod \"root-account-create-update-59vfk\" (UID: \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\") " pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.956947 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-operator-scripts\") pod \"root-account-create-update-59vfk\" (UID: \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\") " pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:36 crc kubenswrapper[5029]: I0313 20:47:36.979942 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tdpc\" (UniqueName: \"kubernetes.io/projected/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-kube-api-access-6tdpc\") pod \"root-account-create-update-59vfk\" (UID: \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\") " pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.028342 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.439148 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d4mwd" event={"ID":"f3fdc768-348b-4581-a918-a009351efeee","Type":"ContainerStarted","Data":"983c5aea2df6abc1270d34f867062fec996478072553539147237d902d59eb90"} Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.439217 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.486418 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-cbld8" podStartSLOduration=5.486398802 podStartE2EDuration="5.486398802s" podCreationTimestamp="2026-03-13 20:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:47:37.457292648 +0000 UTC m=+1217.473375071" watchObservedRunningTime="2026-03-13 20:47:37.486398802 +0000 UTC m=+1217.502481205" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.487978 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-59vfk"] Mar 13 20:47:37 crc kubenswrapper[5029]: W0313 20:47:37.491644 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50e3e1ab_c51d_4ad9_8f2a_9a2c59626642.slice/crio-8484c6fa8e7f0680b471cee35b0b584b85a5898465f76b01e3b487cbd7e835f1 WatchSource:0}: Error finding container 8484c6fa8e7f0680b471cee35b0b584b85a5898465f76b01e3b487cbd7e835f1: Status 404 returned error can't find the container with id 8484c6fa8e7f0680b471cee35b0b584b85a5898465f76b01e3b487cbd7e835f1 Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.620043 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2phnh"] Mar 
13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.621299 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.623511 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.624120 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.624878 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.642939 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2phnh"] Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.670150 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:37 crc kubenswrapper[5029]: E0313 20:47:37.670730 5029 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 20:47:37 crc kubenswrapper[5029]: E0313 20:47:37.670755 5029 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 20:47:37 crc kubenswrapper[5029]: E0313 20:47:37.670810 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift podName:81a1e5be-bbdf-4a80-a209-3acb956f5c86 nodeName:}" failed. No retries permitted until 2026-03-13 20:47:41.670792116 +0000 UTC m=+1221.686874519 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift") pod "swift-storage-0" (UID: "81a1e5be-bbdf-4a80-a209-3acb956f5c86") : configmap "swift-ring-files" not found Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.772124 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-scripts\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.772242 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-ring-data-devices\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.772282 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-dispersionconf\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.772299 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n8rg\" (UniqueName: \"kubernetes.io/projected/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-kube-api-access-6n8rg\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.772327 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-combined-ca-bundle\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.772385 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-swiftconf\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.772431 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-etc-swift\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.874550 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-ring-data-devices\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.874636 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-dispersionconf\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.874656 5029 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6n8rg\" (UniqueName: \"kubernetes.io/projected/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-kube-api-access-6n8rg\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.874687 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-combined-ca-bundle\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.874760 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-swiftconf\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.874816 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-etc-swift\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.874870 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-scripts\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.875396 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-etc-swift\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.875502 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-ring-data-devices\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.877405 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-scripts\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.879578 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-dispersionconf\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.880149 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-combined-ca-bundle\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.883456 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-swiftconf\") pod \"swift-ring-rebalance-2phnh\" (UID: 
\"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.893956 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n8rg\" (UniqueName: \"kubernetes.io/projected/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-kube-api-access-6n8rg\") pod \"swift-ring-rebalance-2phnh\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:37 crc kubenswrapper[5029]: I0313 20:47:37.938729 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:38 crc kubenswrapper[5029]: I0313 20:47:38.275052 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" Mar 13 20:47:38 crc kubenswrapper[5029]: I0313 20:47:38.420642 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2phnh"] Mar 13 20:47:38 crc kubenswrapper[5029]: W0313 20:47:38.430104 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68daffaa_8e1e_4af0_99e1_6fe5b9aa04b6.slice/crio-83975799eb5987cdf276205a42ee5a125ea3267b3bfb8c21cf83cee0f31fbac0 WatchSource:0}: Error finding container 83975799eb5987cdf276205a42ee5a125ea3267b3bfb8c21cf83cee0f31fbac0: Status 404 returned error can't find the container with id 83975799eb5987cdf276205a42ee5a125ea3267b3bfb8c21cf83cee0f31fbac0 Mar 13 20:47:38 crc kubenswrapper[5029]: I0313 20:47:38.472012 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2phnh" event={"ID":"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6","Type":"ContainerStarted","Data":"83975799eb5987cdf276205a42ee5a125ea3267b3bfb8c21cf83cee0f31fbac0"} Mar 13 20:47:38 crc kubenswrapper[5029]: I0313 20:47:38.479076 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="50e3e1ab-c51d-4ad9-8f2a-9a2c59626642" containerID="983184384b28336161891a392548956c22d8f6d8f3212b2185a18d18739da541" exitCode=0 Mar 13 20:47:38 crc kubenswrapper[5029]: I0313 20:47:38.480151 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59vfk" event={"ID":"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642","Type":"ContainerDied","Data":"983184384b28336161891a392548956c22d8f6d8f3212b2185a18d18739da541"} Mar 13 20:47:38 crc kubenswrapper[5029]: I0313 20:47:38.480181 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59vfk" event={"ID":"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642","Type":"ContainerStarted","Data":"8484c6fa8e7f0680b471cee35b0b584b85a5898465f76b01e3b487cbd7e835f1"} Mar 13 20:47:39 crc kubenswrapper[5029]: I0313 20:47:39.920905 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:40 crc kubenswrapper[5029]: I0313 20:47:40.021558 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tdpc\" (UniqueName: \"kubernetes.io/projected/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-kube-api-access-6tdpc\") pod \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\" (UID: \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\") " Mar 13 20:47:40 crc kubenswrapper[5029]: I0313 20:47:40.021771 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-operator-scripts\") pod \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\" (UID: \"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642\") " Mar 13 20:47:40 crc kubenswrapper[5029]: I0313 20:47:40.026671 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50e3e1ab-c51d-4ad9-8f2a-9a2c59626642" 
(UID: "50e3e1ab-c51d-4ad9-8f2a-9a2c59626642"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:40 crc kubenswrapper[5029]: I0313 20:47:40.031468 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-kube-api-access-6tdpc" (OuterVolumeSpecName: "kube-api-access-6tdpc") pod "50e3e1ab-c51d-4ad9-8f2a-9a2c59626642" (UID: "50e3e1ab-c51d-4ad9-8f2a-9a2c59626642"). InnerVolumeSpecName "kube-api-access-6tdpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:40 crc kubenswrapper[5029]: I0313 20:47:40.123711 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tdpc\" (UniqueName: \"kubernetes.io/projected/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-kube-api-access-6tdpc\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:40 crc kubenswrapper[5029]: I0313 20:47:40.123748 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:40 crc kubenswrapper[5029]: I0313 20:47:40.511617 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-59vfk" event={"ID":"50e3e1ab-c51d-4ad9-8f2a-9a2c59626642","Type":"ContainerDied","Data":"8484c6fa8e7f0680b471cee35b0b584b85a5898465f76b01e3b487cbd7e835f1"} Mar 13 20:47:40 crc kubenswrapper[5029]: I0313 20:47:40.512114 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8484c6fa8e7f0680b471cee35b0b584b85a5898465f76b01e3b487cbd7e835f1" Mar 13 20:47:40 crc kubenswrapper[5029]: I0313 20:47:40.512235 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-59vfk" Mar 13 20:47:41 crc kubenswrapper[5029]: I0313 20:47:41.754060 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:41 crc kubenswrapper[5029]: E0313 20:47:41.754302 5029 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 20:47:41 crc kubenswrapper[5029]: E0313 20:47:41.754315 5029 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 20:47:41 crc kubenswrapper[5029]: E0313 20:47:41.754352 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift podName:81a1e5be-bbdf-4a80-a209-3acb956f5c86 nodeName:}" failed. No retries permitted until 2026-03-13 20:47:49.754339885 +0000 UTC m=+1229.770422288 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift") pod "swift-storage-0" (UID: "81a1e5be-bbdf-4a80-a209-3acb956f5c86") : configmap "swift-ring-files" not found Mar 13 20:47:42 crc kubenswrapper[5029]: I0313 20:47:42.900506 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:47:42 crc kubenswrapper[5029]: I0313 20:47:42.957418 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wcgc6"] Mar 13 20:47:42 crc kubenswrapper[5029]: I0313 20:47:42.957685 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" podUID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" containerName="dnsmasq-dns" containerID="cri-o://aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303" gracePeriod=10 Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.132044 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-59vfk"] Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.138420 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-59vfk"] Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.448839 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.544288 5029 generic.go:334] "Generic (PLEG): container finished" podID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" containerID="aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303" exitCode=0 Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.544361 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" event={"ID":"70c7e4de-b839-4da7-91a8-474a4a5fd16f","Type":"ContainerDied","Data":"aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303"} Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.544411 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" event={"ID":"70c7e4de-b839-4da7-91a8-474a4a5fd16f","Type":"ContainerDied","Data":"bfa377bebbd63e289364d2e83486511c23f52f050f4aa8008e7343558369243f"} Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.544429 5029 scope.go:117] "RemoveContainer" containerID="aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.544563 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.549387 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2phnh" event={"ID":"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6","Type":"ContainerStarted","Data":"2c4bf9514623347a9437f13d9d5aa998d9632289add8a2287300f31b267f8a8b"} Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.567076 5029 scope.go:117] "RemoveContainer" containerID="9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.574086 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2phnh" podStartSLOduration=2.112150891 podStartE2EDuration="6.574058408s" podCreationTimestamp="2026-03-13 20:47:37 +0000 UTC" firstStartedPulling="2026-03-13 20:47:38.433552577 +0000 UTC m=+1218.449634970" lastFinishedPulling="2026-03-13 20:47:42.895460084 +0000 UTC m=+1222.911542487" observedRunningTime="2026-03-13 20:47:43.567513729 +0000 UTC m=+1223.583596152" watchObservedRunningTime="2026-03-13 20:47:43.574058408 +0000 UTC m=+1223.590140811" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.583985 5029 scope.go:117] "RemoveContainer" containerID="aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303" Mar 13 20:47:43 crc kubenswrapper[5029]: E0313 20:47:43.584506 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303\": container with ID starting with aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303 not found: ID does not exist" containerID="aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.584552 5029 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303"} err="failed to get container status \"aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303\": rpc error: code = NotFound desc = could not find container \"aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303\": container with ID starting with aa3d07957b74d1d89c7717d0f6a88e26adea302e194f001a0bd41206fac06303 not found: ID does not exist" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.584578 5029 scope.go:117] "RemoveContainer" containerID="9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e" Mar 13 20:47:43 crc kubenswrapper[5029]: E0313 20:47:43.585023 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e\": container with ID starting with 9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e not found: ID does not exist" containerID="9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.585054 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e"} err="failed to get container status \"9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e\": rpc error: code = NotFound desc = could not find container \"9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e\": container with ID starting with 9b84fb4d0c7757e7c5acb1605fb4f22e235e53f23082bc4e200f806ff1464b6e not found: ID does not exist" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.608826 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-config\") pod \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\" (UID: 
\"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.609067 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fctwk\" (UniqueName: \"kubernetes.io/projected/70c7e4de-b839-4da7-91a8-474a4a5fd16f-kube-api-access-fctwk\") pod \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.609106 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-nb\") pod \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.609171 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-dns-svc\") pod \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.609215 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-sb\") pod \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\" (UID: \"70c7e4de-b839-4da7-91a8-474a4a5fd16f\") " Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.615145 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c7e4de-b839-4da7-91a8-474a4a5fd16f-kube-api-access-fctwk" (OuterVolumeSpecName: "kube-api-access-fctwk") pod "70c7e4de-b839-4da7-91a8-474a4a5fd16f" (UID: "70c7e4de-b839-4da7-91a8-474a4a5fd16f"). InnerVolumeSpecName "kube-api-access-fctwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.648422 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70c7e4de-b839-4da7-91a8-474a4a5fd16f" (UID: "70c7e4de-b839-4da7-91a8-474a4a5fd16f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.649873 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-config" (OuterVolumeSpecName: "config") pod "70c7e4de-b839-4da7-91a8-474a4a5fd16f" (UID: "70c7e4de-b839-4da7-91a8-474a4a5fd16f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.651799 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70c7e4de-b839-4da7-91a8-474a4a5fd16f" (UID: "70c7e4de-b839-4da7-91a8-474a4a5fd16f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.655499 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70c7e4de-b839-4da7-91a8-474a4a5fd16f" (UID: "70c7e4de-b839-4da7-91a8-474a4a5fd16f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.712178 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.712223 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.712240 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.712256 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fctwk\" (UniqueName: \"kubernetes.io/projected/70c7e4de-b839-4da7-91a8-474a4a5fd16f-kube-api-access-fctwk\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.712267 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c7e4de-b839-4da7-91a8-474a4a5fd16f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.882993 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wcgc6"] Mar 13 20:47:43 crc kubenswrapper[5029]: I0313 20:47:43.893477 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wcgc6"] Mar 13 20:47:44 crc kubenswrapper[5029]: I0313 20:47:44.609553 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e3e1ab-c51d-4ad9-8f2a-9a2c59626642" path="/var/lib/kubelet/pods/50e3e1ab-c51d-4ad9-8f2a-9a2c59626642/volumes" Mar 13 20:47:44 crc kubenswrapper[5029]: 
I0313 20:47:44.610351 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" path="/var/lib/kubelet/pods/70c7e4de-b839-4da7-91a8-474a4a5fd16f/volumes" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.159813 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-h227t"] Mar 13 20:47:48 crc kubenswrapper[5029]: E0313 20:47:48.160930 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" containerName="dnsmasq-dns" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.160972 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" containerName="dnsmasq-dns" Mar 13 20:47:48 crc kubenswrapper[5029]: E0313 20:47:48.161048 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e3e1ab-c51d-4ad9-8f2a-9a2c59626642" containerName="mariadb-account-create-update" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.161057 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e3e1ab-c51d-4ad9-8f2a-9a2c59626642" containerName="mariadb-account-create-update" Mar 13 20:47:48 crc kubenswrapper[5029]: E0313 20:47:48.161070 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" containerName="init" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.161076 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" containerName="init" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.161381 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" containerName="dnsmasq-dns" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.161407 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e3e1ab-c51d-4ad9-8f2a-9a2c59626642" containerName="mariadb-account-create-update" Mar 13 20:47:48 crc 
kubenswrapper[5029]: I0313 20:47:48.162477 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h227t" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.165176 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.175142 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h227t"] Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.274102 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-wcgc6" podUID="70c7e4de-b839-4da7-91a8-474a4a5fd16f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.294426 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cceec7e-5b5a-45b4-8480-9f44ce88107a-operator-scripts\") pod \"root-account-create-update-h227t\" (UID: \"9cceec7e-5b5a-45b4-8480-9f44ce88107a\") " pod="openstack/root-account-create-update-h227t" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.295047 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x429k\" (UniqueName: \"kubernetes.io/projected/9cceec7e-5b5a-45b4-8480-9f44ce88107a-kube-api-access-x429k\") pod \"root-account-create-update-h227t\" (UID: \"9cceec7e-5b5a-45b4-8480-9f44ce88107a\") " pod="openstack/root-account-create-update-h227t" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.397300 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x429k\" (UniqueName: \"kubernetes.io/projected/9cceec7e-5b5a-45b4-8480-9f44ce88107a-kube-api-access-x429k\") pod \"root-account-create-update-h227t\" (UID: 
\"9cceec7e-5b5a-45b4-8480-9f44ce88107a\") " pod="openstack/root-account-create-update-h227t" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.397380 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cceec7e-5b5a-45b4-8480-9f44ce88107a-operator-scripts\") pod \"root-account-create-update-h227t\" (UID: \"9cceec7e-5b5a-45b4-8480-9f44ce88107a\") " pod="openstack/root-account-create-update-h227t" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.398423 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cceec7e-5b5a-45b4-8480-9f44ce88107a-operator-scripts\") pod \"root-account-create-update-h227t\" (UID: \"9cceec7e-5b5a-45b4-8480-9f44ce88107a\") " pod="openstack/root-account-create-update-h227t" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.423064 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x429k\" (UniqueName: \"kubernetes.io/projected/9cceec7e-5b5a-45b4-8480-9f44ce88107a-kube-api-access-x429k\") pod \"root-account-create-update-h227t\" (UID: \"9cceec7e-5b5a-45b4-8480-9f44ce88107a\") " pod="openstack/root-account-create-update-h227t" Mar 13 20:47:48 crc kubenswrapper[5029]: I0313 20:47:48.498486 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h227t" Mar 13 20:47:49 crc kubenswrapper[5029]: I0313 20:47:49.822174 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:47:49 crc kubenswrapper[5029]: E0313 20:47:49.822358 5029 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 20:47:49 crc kubenswrapper[5029]: E0313 20:47:49.822596 5029 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 20:47:49 crc kubenswrapper[5029]: E0313 20:47:49.822658 5029 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift podName:81a1e5be-bbdf-4a80-a209-3acb956f5c86 nodeName:}" failed. No retries permitted until 2026-03-13 20:48:05.822641246 +0000 UTC m=+1245.838723649 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift") pod "swift-storage-0" (UID: "81a1e5be-bbdf-4a80-a209-3acb956f5c86") : configmap "swift-ring-files" not found Mar 13 20:47:50 crc kubenswrapper[5029]: I0313 20:47:50.000438 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 13 20:47:52 crc kubenswrapper[5029]: I0313 20:47:52.620183 5029 generic.go:334] "Generic (PLEG): container finished" podID="68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" containerID="2c4bf9514623347a9437f13d9d5aa998d9632289add8a2287300f31b267f8a8b" exitCode=0 Mar 13 20:47:52 crc kubenswrapper[5029]: I0313 20:47:52.621053 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2phnh" event={"ID":"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6","Type":"ContainerDied","Data":"2c4bf9514623347a9437f13d9d5aa998d9632289add8a2287300f31b267f8a8b"} Mar 13 20:47:52 crc kubenswrapper[5029]: I0313 20:47:52.621279 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h227t"] Mar 13 20:47:52 crc kubenswrapper[5029]: W0313 20:47:52.628066 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cceec7e_5b5a_45b4_8480_9f44ce88107a.slice/crio-ce09aaafbe93325eb0e417953a92474f2c19444613871ded3b186b46e0e5791b WatchSource:0}: Error finding container ce09aaafbe93325eb0e417953a92474f2c19444613871ded3b186b46e0e5791b: Status 404 returned error can't find the container with id ce09aaafbe93325eb0e417953a92474f2c19444613871ded3b186b46e0e5791b Mar 13 20:47:53 crc kubenswrapper[5029]: I0313 20:47:53.632316 5029 generic.go:334] "Generic (PLEG): container finished" podID="9cceec7e-5b5a-45b4-8480-9f44ce88107a" containerID="3bc964d09ae70bbe583878d31a8e5fcdf1551597251dc04c22a2c5259743d3d7" exitCode=0 Mar 13 20:47:53 crc kubenswrapper[5029]: I0313 
20:47:53.632544 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h227t" event={"ID":"9cceec7e-5b5a-45b4-8480-9f44ce88107a","Type":"ContainerDied","Data":"3bc964d09ae70bbe583878d31a8e5fcdf1551597251dc04c22a2c5259743d3d7"} Mar 13 20:47:53 crc kubenswrapper[5029]: I0313 20:47:53.632661 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h227t" event={"ID":"9cceec7e-5b5a-45b4-8480-9f44ce88107a","Type":"ContainerStarted","Data":"ce09aaafbe93325eb0e417953a92474f2c19444613871ded3b186b46e0e5791b"} Mar 13 20:47:53 crc kubenswrapper[5029]: I0313 20:47:53.636505 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d4mwd" event={"ID":"f3fdc768-348b-4581-a918-a009351efeee","Type":"ContainerStarted","Data":"e854972751227e9daeb84215df7b13d62235c8bb747460fc4e9973d069bd1198"} Mar 13 20:47:53 crc kubenswrapper[5029]: I0313 20:47:53.671266 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-d4mwd" podStartSLOduration=3.31311606 podStartE2EDuration="18.671250212s" podCreationTimestamp="2026-03-13 20:47:35 +0000 UTC" firstStartedPulling="2026-03-13 20:47:36.919991501 +0000 UTC m=+1216.936073904" lastFinishedPulling="2026-03-13 20:47:52.278125653 +0000 UTC m=+1232.294208056" observedRunningTime="2026-03-13 20:47:53.669157855 +0000 UTC m=+1233.685240278" watchObservedRunningTime="2026-03-13 20:47:53.671250212 +0000 UTC m=+1233.687332615" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:53.990093 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.121605 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-combined-ca-bundle\") pod \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.121661 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-etc-swift\") pod \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.121700 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n8rg\" (UniqueName: \"kubernetes.io/projected/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-kube-api-access-6n8rg\") pod \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.121727 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-ring-data-devices\") pod \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.121758 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-scripts\") pod \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.121809 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-dispersionconf\") pod \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.121862 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-swiftconf\") pod \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\" (UID: \"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6\") " Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.122762 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" (UID: "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.123616 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" (UID: "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.127991 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-kube-api-access-6n8rg" (OuterVolumeSpecName: "kube-api-access-6n8rg") pod "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" (UID: "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6"). InnerVolumeSpecName "kube-api-access-6n8rg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.133123 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" (UID: "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.145400 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" (UID: "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.145962 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" (UID: "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.146046 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-scripts" (OuterVolumeSpecName: "scripts") pod "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" (UID: "68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.224297 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.224328 5029 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.224337 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n8rg\" (UniqueName: \"kubernetes.io/projected/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-kube-api-access-6n8rg\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.224348 5029 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.224359 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.224366 5029 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.224374 5029 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.647109 5029 
generic.go:334] "Generic (PLEG): container finished" podID="016118a1-8825-4373-a487-2fa17c45488a" containerID="69b8d86fa5c0171e8ea41bc86941b1160f2a6de1cd11c89e37ba71b2ab3e9d1b" exitCode=0 Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.647632 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"016118a1-8825-4373-a487-2fa17c45488a","Type":"ContainerDied","Data":"69b8d86fa5c0171e8ea41bc86941b1160f2a6de1cd11c89e37ba71b2ab3e9d1b"} Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.658209 5029 generic.go:334] "Generic (PLEG): container finished" podID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" containerID="f101418f370ae7a45ed8ce6c68a911416c7a732eaaa28c1cf01622c29ce93a94" exitCode=0 Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.658331 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ff0edef-42cf-4ba2-b170-87cfdd6deefb","Type":"ContainerDied","Data":"f101418f370ae7a45ed8ce6c68a911416c7a732eaaa28c1cf01622c29ce93a94"} Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.676916 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2phnh" event={"ID":"68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6","Type":"ContainerDied","Data":"83975799eb5987cdf276205a42ee5a125ea3267b3bfb8c21cf83cee0f31fbac0"} Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.676996 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83975799eb5987cdf276205a42ee5a125ea3267b3bfb8c21cf83cee0f31fbac0" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:54.676949 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2phnh" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.324772 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h227t" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.501507 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cceec7e-5b5a-45b4-8480-9f44ce88107a-operator-scripts\") pod \"9cceec7e-5b5a-45b4-8480-9f44ce88107a\" (UID: \"9cceec7e-5b5a-45b4-8480-9f44ce88107a\") " Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.501735 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x429k\" (UniqueName: \"kubernetes.io/projected/9cceec7e-5b5a-45b4-8480-9f44ce88107a-kube-api-access-x429k\") pod \"9cceec7e-5b5a-45b4-8480-9f44ce88107a\" (UID: \"9cceec7e-5b5a-45b4-8480-9f44ce88107a\") " Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.502637 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cceec7e-5b5a-45b4-8480-9f44ce88107a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9cceec7e-5b5a-45b4-8480-9f44ce88107a" (UID: "9cceec7e-5b5a-45b4-8480-9f44ce88107a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.507574 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cceec7e-5b5a-45b4-8480-9f44ce88107a-kube-api-access-x429k" (OuterVolumeSpecName: "kube-api-access-x429k") pod "9cceec7e-5b5a-45b4-8480-9f44ce88107a" (UID: "9cceec7e-5b5a-45b4-8480-9f44ce88107a"). InnerVolumeSpecName "kube-api-access-x429k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.538840 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xvrv7" podUID="09599f34-8760-4612-9d50-925aeb8134b4" containerName="ovn-controller" probeResult="failure" output=< Mar 13 20:47:55 crc kubenswrapper[5029]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 13 20:47:55 crc kubenswrapper[5029]: > Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.603802 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x429k\" (UniqueName: \"kubernetes.io/projected/9cceec7e-5b5a-45b4-8480-9f44ce88107a-kube-api-access-x429k\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.603842 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cceec7e-5b5a-45b4-8480-9f44ce88107a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.604369 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.627344 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bj9ld" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.686456 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"016118a1-8825-4373-a487-2fa17c45488a","Type":"ContainerStarted","Data":"e63b2d48cb795baddd2da68a783532d53775c5d80b3c7700de4f8fdde679face"} Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.686730 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.689689 5029 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ff0edef-42cf-4ba2-b170-87cfdd6deefb","Type":"ContainerStarted","Data":"d260845f68e5a0c459e71dd0e85ae681987421511bd7bc9fb3a2e532dad36481"} Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.689917 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.691440 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h227t" event={"ID":"9cceec7e-5b5a-45b4-8480-9f44ce88107a","Type":"ContainerDied","Data":"ce09aaafbe93325eb0e417953a92474f2c19444613871ded3b186b46e0e5791b"} Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.691568 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce09aaafbe93325eb0e417953a92474f2c19444613871ded3b186b46e0e5791b" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.691478 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h227t" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.714512 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.01538807 podStartE2EDuration="1m10.714490956s" podCreationTimestamp="2026-03-13 20:46:45 +0000 UTC" firstStartedPulling="2026-03-13 20:46:47.577607083 +0000 UTC m=+1167.593689486" lastFinishedPulling="2026-03-13 20:47:20.276709969 +0000 UTC m=+1200.292792372" observedRunningTime="2026-03-13 20:47:55.713101509 +0000 UTC m=+1235.729183902" watchObservedRunningTime="2026-03-13 20:47:55.714490956 +0000 UTC m=+1235.730573359" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.753991 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.874864352 podStartE2EDuration="1m9.753968874s" podCreationTimestamp="2026-03-13 20:46:46 +0000 UTC" firstStartedPulling="2026-03-13 20:46:48.274544007 +0000 UTC m=+1168.290626410" lastFinishedPulling="2026-03-13 20:47:20.153648529 +0000 UTC m=+1200.169730932" observedRunningTime="2026-03-13 20:47:55.750399177 +0000 UTC m=+1235.766481600" watchObservedRunningTime="2026-03-13 20:47:55.753968874 +0000 UTC m=+1235.770051277" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.884158 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xvrv7-config-wq4h5"] Mar 13 20:47:55 crc kubenswrapper[5029]: E0313 20:47:55.884542 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cceec7e-5b5a-45b4-8480-9f44ce88107a" containerName="mariadb-account-create-update" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.884562 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cceec7e-5b5a-45b4-8480-9f44ce88107a" containerName="mariadb-account-create-update" Mar 13 20:47:55 crc kubenswrapper[5029]: E0313 20:47:55.884586 5029 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" containerName="swift-ring-rebalance" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.884593 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" containerName="swift-ring-rebalance" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.884750 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6" containerName="swift-ring-rebalance" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.884766 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cceec7e-5b5a-45b4-8480-9f44ce88107a" containerName="mariadb-account-create-update" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.885298 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.891749 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 13 20:47:55 crc kubenswrapper[5029]: I0313 20:47:55.903009 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xvrv7-config-wq4h5"] Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.010315 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.010394 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-log-ovn\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: 
\"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.010452 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-scripts\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.010513 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run-ovn\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.010568 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7gtl\" (UniqueName: \"kubernetes.io/projected/6be697ee-11dd-4d68-8085-ab298ad1d923-kube-api-access-j7gtl\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.010595 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-additional-scripts\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.112453 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-additional-scripts\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.112541 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.112597 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-log-ovn\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.112641 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-scripts\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.112691 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run-ovn\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.112728 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7gtl\" (UniqueName: 
\"kubernetes.io/projected/6be697ee-11dd-4d68-8085-ab298ad1d923-kube-api-access-j7gtl\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.113361 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.113417 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-log-ovn\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.113576 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-additional-scripts\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.113677 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run-ovn\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.115293 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-scripts\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.140558 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7gtl\" (UniqueName: \"kubernetes.io/projected/6be697ee-11dd-4d68-8085-ab298ad1d923-kube-api-access-j7gtl\") pod \"ovn-controller-xvrv7-config-wq4h5\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.199647 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.639389 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xvrv7-config-wq4h5"] Mar 13 20:47:56 crc kubenswrapper[5029]: W0313 20:47:56.642251 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6be697ee_11dd_4d68_8085_ab298ad1d923.slice/crio-c7b4c2fce06687f8f89a909dafd1cff5c7183f05b772e22314561f428a6e5b27 WatchSource:0}: Error finding container c7b4c2fce06687f8f89a909dafd1cff5c7183f05b772e22314561f428a6e5b27: Status 404 returned error can't find the container with id c7b4c2fce06687f8f89a909dafd1cff5c7183f05b772e22314561f428a6e5b27 Mar 13 20:47:56 crc kubenswrapper[5029]: I0313 20:47:56.699555 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xvrv7-config-wq4h5" event={"ID":"6be697ee-11dd-4d68-8085-ab298ad1d923","Type":"ContainerStarted","Data":"c7b4c2fce06687f8f89a909dafd1cff5c7183f05b772e22314561f428a6e5b27"} Mar 13 20:47:57 crc kubenswrapper[5029]: I0313 20:47:57.708306 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="6be697ee-11dd-4d68-8085-ab298ad1d923" containerID="f218aa80a96b66b0dcb404ade51868980754399c43b9cc4032a0c012366d07d8" exitCode=0 Mar 13 20:47:57 crc kubenswrapper[5029]: I0313 20:47:57.708374 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xvrv7-config-wq4h5" event={"ID":"6be697ee-11dd-4d68-8085-ab298ad1d923","Type":"ContainerDied","Data":"f218aa80a96b66b0dcb404ade51868980754399c43b9cc4032a0c012366d07d8"} Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.039409 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.189532 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-scripts\") pod \"6be697ee-11dd-4d68-8085-ab298ad1d923\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.189673 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-log-ovn\") pod \"6be697ee-11dd-4d68-8085-ab298ad1d923\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.189714 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7gtl\" (UniqueName: \"kubernetes.io/projected/6be697ee-11dd-4d68-8085-ab298ad1d923-kube-api-access-j7gtl\") pod \"6be697ee-11dd-4d68-8085-ab298ad1d923\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.189729 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6be697ee-11dd-4d68-8085-ab298ad1d923" 
(UID: "6be697ee-11dd-4d68-8085-ab298ad1d923"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.189793 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-additional-scripts\") pod \"6be697ee-11dd-4d68-8085-ab298ad1d923\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.189817 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run\") pod \"6be697ee-11dd-4d68-8085-ab298ad1d923\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.189878 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run-ovn\") pod \"6be697ee-11dd-4d68-8085-ab298ad1d923\" (UID: \"6be697ee-11dd-4d68-8085-ab298ad1d923\") " Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.189907 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run" (OuterVolumeSpecName: "var-run") pod "6be697ee-11dd-4d68-8085-ab298ad1d923" (UID: "6be697ee-11dd-4d68-8085-ab298ad1d923"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.190013 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6be697ee-11dd-4d68-8085-ab298ad1d923" (UID: "6be697ee-11dd-4d68-8085-ab298ad1d923"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.190518 5029 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.190544 5029 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.190555 5029 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6be697ee-11dd-4d68-8085-ab298ad1d923-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.190611 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6be697ee-11dd-4d68-8085-ab298ad1d923" (UID: "6be697ee-11dd-4d68-8085-ab298ad1d923"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.190895 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-scripts" (OuterVolumeSpecName: "scripts") pod "6be697ee-11dd-4d68-8085-ab298ad1d923" (UID: "6be697ee-11dd-4d68-8085-ab298ad1d923"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.198010 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be697ee-11dd-4d68-8085-ab298ad1d923-kube-api-access-j7gtl" (OuterVolumeSpecName: "kube-api-access-j7gtl") pod "6be697ee-11dd-4d68-8085-ab298ad1d923" (UID: "6be697ee-11dd-4d68-8085-ab298ad1d923"). InnerVolumeSpecName "kube-api-access-j7gtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.292295 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7gtl\" (UniqueName: \"kubernetes.io/projected/6be697ee-11dd-4d68-8085-ab298ad1d923-kube-api-access-j7gtl\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.292324 5029 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.292333 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6be697ee-11dd-4d68-8085-ab298ad1d923-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.723495 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xvrv7-config-wq4h5" event={"ID":"6be697ee-11dd-4d68-8085-ab298ad1d923","Type":"ContainerDied","Data":"c7b4c2fce06687f8f89a909dafd1cff5c7183f05b772e22314561f428a6e5b27"} Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.723537 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b4c2fce06687f8f89a909dafd1cff5c7183f05b772e22314561f428a6e5b27" Mar 13 20:47:59 crc kubenswrapper[5029]: I0313 20:47:59.723588 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xvrv7-config-wq4h5" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.137417 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557248-5d2z7"] Mar 13 20:48:00 crc kubenswrapper[5029]: E0313 20:48:00.137896 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be697ee-11dd-4d68-8085-ab298ad1d923" containerName="ovn-config" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.137910 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be697ee-11dd-4d68-8085-ab298ad1d923" containerName="ovn-config" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.138055 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be697ee-11dd-4d68-8085-ab298ad1d923" containerName="ovn-config" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.138566 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557248-5d2z7" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.140673 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.140959 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.140986 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.167071 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557248-5d2z7"] Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.192415 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xvrv7-config-wq4h5"] Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.213865 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqpn\" (UniqueName: \"kubernetes.io/projected/508d75bf-4e85-49d1-b942-ebd7d8a63e51-kube-api-access-8pqpn\") pod \"auto-csr-approver-29557248-5d2z7\" (UID: \"508d75bf-4e85-49d1-b942-ebd7d8a63e51\") " pod="openshift-infra/auto-csr-approver-29557248-5d2z7" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.219301 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xvrv7-config-wq4h5"] Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.315118 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pqpn\" (UniqueName: \"kubernetes.io/projected/508d75bf-4e85-49d1-b942-ebd7d8a63e51-kube-api-access-8pqpn\") pod \"auto-csr-approver-29557248-5d2z7\" (UID: \"508d75bf-4e85-49d1-b942-ebd7d8a63e51\") " pod="openshift-infra/auto-csr-approver-29557248-5d2z7" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.331923 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pqpn\" (UniqueName: \"kubernetes.io/projected/508d75bf-4e85-49d1-b942-ebd7d8a63e51-kube-api-access-8pqpn\") pod \"auto-csr-approver-29557248-5d2z7\" (UID: \"508d75bf-4e85-49d1-b942-ebd7d8a63e51\") " pod="openshift-infra/auto-csr-approver-29557248-5d2z7" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.454563 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557248-5d2z7" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.546176 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xvrv7" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.650351 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be697ee-11dd-4d68-8085-ab298ad1d923" path="/var/lib/kubelet/pods/6be697ee-11dd-4d68-8085-ab298ad1d923/volumes" Mar 13 20:48:00 crc kubenswrapper[5029]: I0313 20:48:00.910032 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557248-5d2z7"] Mar 13 20:48:00 crc kubenswrapper[5029]: W0313 20:48:00.918392 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod508d75bf_4e85_49d1_b942_ebd7d8a63e51.slice/crio-f5a71c77a9e4f16568eb0ecc67d299ff4bb91a31a5e3617395f8e0605540b67b WatchSource:0}: Error finding container f5a71c77a9e4f16568eb0ecc67d299ff4bb91a31a5e3617395f8e0605540b67b: Status 404 returned error can't find the container with id f5a71c77a9e4f16568eb0ecc67d299ff4bb91a31a5e3617395f8e0605540b67b Mar 13 20:48:01 crc kubenswrapper[5029]: I0313 20:48:01.741404 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557248-5d2z7" event={"ID":"508d75bf-4e85-49d1-b942-ebd7d8a63e51","Type":"ContainerStarted","Data":"f5a71c77a9e4f16568eb0ecc67d299ff4bb91a31a5e3617395f8e0605540b67b"} Mar 13 20:48:03 crc kubenswrapper[5029]: I0313 20:48:03.758503 5029 generic.go:334] "Generic (PLEG): container finished" podID="508d75bf-4e85-49d1-b942-ebd7d8a63e51" containerID="bf9166e5850834b7827d2f2cca353355cad3585d5a41b5d7498ef6b88e12fce9" exitCode=0 Mar 13 20:48:03 crc kubenswrapper[5029]: I0313 20:48:03.758623 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557248-5d2z7" 
event={"ID":"508d75bf-4e85-49d1-b942-ebd7d8a63e51","Type":"ContainerDied","Data":"bf9166e5850834b7827d2f2cca353355cad3585d5a41b5d7498ef6b88e12fce9"} Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.079690 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557248-5d2z7" Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.204289 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pqpn\" (UniqueName: \"kubernetes.io/projected/508d75bf-4e85-49d1-b942-ebd7d8a63e51-kube-api-access-8pqpn\") pod \"508d75bf-4e85-49d1-b942-ebd7d8a63e51\" (UID: \"508d75bf-4e85-49d1-b942-ebd7d8a63e51\") " Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.210483 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/508d75bf-4e85-49d1-b942-ebd7d8a63e51-kube-api-access-8pqpn" (OuterVolumeSpecName: "kube-api-access-8pqpn") pod "508d75bf-4e85-49d1-b942-ebd7d8a63e51" (UID: "508d75bf-4e85-49d1-b942-ebd7d8a63e51"). InnerVolumeSpecName "kube-api-access-8pqpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.306429 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pqpn\" (UniqueName: \"kubernetes.io/projected/508d75bf-4e85-49d1-b942-ebd7d8a63e51-kube-api-access-8pqpn\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.775291 5029 generic.go:334] "Generic (PLEG): container finished" podID="f3fdc768-348b-4581-a918-a009351efeee" containerID="e854972751227e9daeb84215df7b13d62235c8bb747460fc4e9973d069bd1198" exitCode=0 Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.775392 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d4mwd" event={"ID":"f3fdc768-348b-4581-a918-a009351efeee","Type":"ContainerDied","Data":"e854972751227e9daeb84215df7b13d62235c8bb747460fc4e9973d069bd1198"} Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.777463 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557248-5d2z7" event={"ID":"508d75bf-4e85-49d1-b942-ebd7d8a63e51","Type":"ContainerDied","Data":"f5a71c77a9e4f16568eb0ecc67d299ff4bb91a31a5e3617395f8e0605540b67b"} Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.777489 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5a71c77a9e4f16568eb0ecc67d299ff4bb91a31a5e3617395f8e0605540b67b" Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.777512 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557248-5d2z7" Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.838086 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.843143 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81a1e5be-bbdf-4a80-a209-3acb956f5c86-etc-swift\") pod \"swift-storage-0\" (UID: \"81a1e5be-bbdf-4a80-a209-3acb956f5c86\") " pod="openstack/swift-storage-0" Mar 13 20:48:05 crc kubenswrapper[5029]: I0313 20:48:05.843350 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 13 20:48:06 crc kubenswrapper[5029]: I0313 20:48:06.160140 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557242-hl5b9"] Mar 13 20:48:06 crc kubenswrapper[5029]: I0313 20:48:06.166788 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557242-hl5b9"] Mar 13 20:48:06 crc kubenswrapper[5029]: I0313 20:48:06.377436 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 13 20:48:06 crc kubenswrapper[5029]: W0313 20:48:06.378990 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81a1e5be_bbdf_4a80_a209_3acb956f5c86.slice/crio-10ac412ef84698120fbe6a508114db022d987d4fdd16145f7acd76a8e00bf9fb WatchSource:0}: Error finding container 10ac412ef84698120fbe6a508114db022d987d4fdd16145f7acd76a8e00bf9fb: Status 404 returned error can't find the container with id 10ac412ef84698120fbe6a508114db022d987d4fdd16145f7acd76a8e00bf9fb Mar 13 
20:48:06 crc kubenswrapper[5029]: I0313 20:48:06.608638 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231cc164-04f6-42e0-ad5e-6b30fb9dbba3" path="/var/lib/kubelet/pods/231cc164-04f6-42e0-ad5e-6b30fb9dbba3/volumes" Mar 13 20:48:06 crc kubenswrapper[5029]: I0313 20:48:06.784694 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"10ac412ef84698120fbe6a508114db022d987d4fdd16145f7acd76a8e00bf9fb"} Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.025058 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.218660 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d4mwd" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.367782 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-config-data\") pod \"f3fdc768-348b-4581-a918-a009351efeee\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.367837 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6xls\" (UniqueName: \"kubernetes.io/projected/f3fdc768-348b-4581-a918-a009351efeee-kube-api-access-c6xls\") pod \"f3fdc768-348b-4581-a918-a009351efeee\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.367941 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-db-sync-config-data\") pod \"f3fdc768-348b-4581-a918-a009351efeee\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " Mar 13 20:48:07 crc 
kubenswrapper[5029]: I0313 20:48:07.367967 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-combined-ca-bundle\") pod \"f3fdc768-348b-4581-a918-a009351efeee\" (UID: \"f3fdc768-348b-4581-a918-a009351efeee\") " Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.374476 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fdc768-348b-4581-a918-a009351efeee-kube-api-access-c6xls" (OuterVolumeSpecName: "kube-api-access-c6xls") pod "f3fdc768-348b-4581-a918-a009351efeee" (UID: "f3fdc768-348b-4581-a918-a009351efeee"). InnerVolumeSpecName "kube-api-access-c6xls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.379579 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f3fdc768-348b-4581-a918-a009351efeee" (UID: "f3fdc768-348b-4581-a918-a009351efeee"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.398060 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3fdc768-348b-4581-a918-a009351efeee" (UID: "f3fdc768-348b-4581-a918-a009351efeee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.400069 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.442927 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-config-data" (OuterVolumeSpecName: "config-data") pod "f3fdc768-348b-4581-a918-a009351efeee" (UID: "f3fdc768-348b-4581-a918-a009351efeee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.476524 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.476833 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6xls\" (UniqueName: \"kubernetes.io/projected/f3fdc768-348b-4581-a918-a009351efeee-kube-api-access-c6xls\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.476860 5029 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.476870 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fdc768-348b-4581-a918-a009351efeee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.821560 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d4mwd" 
event={"ID":"f3fdc768-348b-4581-a918-a009351efeee","Type":"ContainerDied","Data":"983c5aea2df6abc1270d34f867062fec996478072553539147237d902d59eb90"} Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.821618 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983c5aea2df6abc1270d34f867062fec996478072553539147237d902d59eb90" Mar 13 20:48:07 crc kubenswrapper[5029]: I0313 20:48:07.821632 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d4mwd" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.268365 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-82hr9"] Mar 13 20:48:08 crc kubenswrapper[5029]: E0313 20:48:08.269021 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fdc768-348b-4581-a918-a009351efeee" containerName="glance-db-sync" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.269047 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fdc768-348b-4581-a918-a009351efeee" containerName="glance-db-sync" Mar 13 20:48:08 crc kubenswrapper[5029]: E0313 20:48:08.269059 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="508d75bf-4e85-49d1-b942-ebd7d8a63e51" containerName="oc" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.269065 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="508d75bf-4e85-49d1-b942-ebd7d8a63e51" containerName="oc" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.269228 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="508d75bf-4e85-49d1-b942-ebd7d8a63e51" containerName="oc" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.269241 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fdc768-348b-4581-a918-a009351efeee" containerName="glance-db-sync" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.270083 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.300743 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-82hr9"] Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.392815 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.392920 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrqj\" (UniqueName: \"kubernetes.io/projected/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-kube-api-access-trrqj\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.392974 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.393001 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-config\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.393044 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.721378 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.721484 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trrqj\" (UniqueName: \"kubernetes.io/projected/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-kube-api-access-trrqj\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.721737 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.721774 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-config\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.721832 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.748059 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.754666 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.755275 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-config\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.757884 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.758092 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trrqj\" (UniqueName: \"kubernetes.io/projected/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-kube-api-access-trrqj\") pod 
\"dnsmasq-dns-5b946c75cc-82hr9\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.836721 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"949072bba477a28912b65b9b3a87fb3de39359163b08d2c70ffecf812c52eb70"} Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.836773 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"b3862d1b7ddf817f23e8de8b4d2cf1294da4d2c35ac4039882d4492d80b450b7"} Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.836798 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"d1ddbfae9cb3e10fa964d629a8943927431afab1eea60e3c54fa93901be9c387"} Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.861795 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 20:48:08 crc kubenswrapper[5029]: I0313 20:48:08.887513 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:09 crc kubenswrapper[5029]: I0313 20:48:09.845806 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"df86a4f3a15f7007b700abc5c87cdd26af5c0b87d6dc20dd8586f7ada5e46872"} Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.095811 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dlcdx"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.097425 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dlcdx" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.106788 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-fjnrj"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.107958 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-fjnrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.129000 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-44kzh"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.130365 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.137353 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.137498 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.137592 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.139878 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qpzzs" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.144209 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-strtq"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.145609 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-strtq" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.154505 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7762-account-create-update-xzjrj"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.156273 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7762-account-create-update-xzjrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.158796 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.164147 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dlcdx"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.173942 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-82hr9"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.188017 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-44kzh"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.196240 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-fjnrj"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.231462 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ghqf7"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.232548 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ghqf7" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.247366 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7762-account-create-update-xzjrj"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.248489 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-operator-scripts\") pod \"cinder-db-create-dlcdx\" (UID: \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\") " pod="openstack/cinder-db-create-dlcdx" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.248531 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-config-data\") pod \"keystone-db-sync-44kzh\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.248607 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nl9t\" (UniqueName: \"kubernetes.io/projected/e9745dc7-7db8-47fd-9e70-4e88a4652c52-kube-api-access-6nl9t\") pod \"neutron-db-create-strtq\" (UID: \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\") " pod="openstack/neutron-db-create-strtq" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.248637 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7x4\" (UniqueName: \"kubernetes.io/projected/73fbf5bd-1541-450a-be13-daf65ce110ac-kube-api-access-lr7x4\") pod \"keystone-db-sync-44kzh\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.248679 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-ptmwt\" (UniqueName: \"kubernetes.io/projected/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-kube-api-access-ptmwt\") pod \"manila-db-create-fjnrj\" (UID: \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\") " pod="openstack/manila-db-create-fjnrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.248697 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9745dc7-7db8-47fd-9e70-4e88a4652c52-operator-scripts\") pod \"neutron-db-create-strtq\" (UID: \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\") " pod="openstack/neutron-db-create-strtq" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.248717 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-combined-ca-bundle\") pod \"keystone-db-sync-44kzh\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.248742 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-operator-scripts\") pod \"manila-db-create-fjnrj\" (UID: \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\") " pod="openstack/manila-db-create-fjnrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.248770 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8q24\" (UniqueName: \"kubernetes.io/projected/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-kube-api-access-n8q24\") pod \"cinder-db-create-dlcdx\" (UID: \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\") " pod="openstack/cinder-db-create-dlcdx" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.293318 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-strtq"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.339984 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b0fa-account-create-update-jg4lr"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.347108 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0fa-account-create-update-jg4lr" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.351374 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.357406 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nl9t\" (UniqueName: \"kubernetes.io/projected/e9745dc7-7db8-47fd-9e70-4e88a4652c52-kube-api-access-6nl9t\") pod \"neutron-db-create-strtq\" (UID: \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\") " pod="openstack/neutron-db-create-strtq" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.357611 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d8hv\" (UniqueName: \"kubernetes.io/projected/1da3ca91-2523-4bbc-9ee8-5957e040e522-kube-api-access-4d8hv\") pod \"cinder-7762-account-create-update-xzjrj\" (UID: \"1da3ca91-2523-4bbc-9ee8-5957e040e522\") " pod="openstack/cinder-7762-account-create-update-xzjrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.357700 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr7x4\" (UniqueName: \"kubernetes.io/projected/73fbf5bd-1541-450a-be13-daf65ce110ac-kube-api-access-lr7x4\") pod \"keystone-db-sync-44kzh\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.357794 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptmwt\" (UniqueName: 
\"kubernetes.io/projected/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-kube-api-access-ptmwt\") pod \"manila-db-create-fjnrj\" (UID: \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\") " pod="openstack/manila-db-create-fjnrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.357971 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9745dc7-7db8-47fd-9e70-4e88a4652c52-operator-scripts\") pod \"neutron-db-create-strtq\" (UID: \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\") " pod="openstack/neutron-db-create-strtq" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.358081 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-combined-ca-bundle\") pod \"keystone-db-sync-44kzh\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.358207 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-operator-scripts\") pod \"manila-db-create-fjnrj\" (UID: \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\") " pod="openstack/manila-db-create-fjnrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.358316 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da3ca91-2523-4bbc-9ee8-5957e040e522-operator-scripts\") pod \"cinder-7762-account-create-update-xzjrj\" (UID: \"1da3ca91-2523-4bbc-9ee8-5957e040e522\") " pod="openstack/cinder-7762-account-create-update-xzjrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.364317 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e9745dc7-7db8-47fd-9e70-4e88a4652c52-operator-scripts\") pod \"neutron-db-create-strtq\" (UID: \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\") " pod="openstack/neutron-db-create-strtq" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.364447 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8q24\" (UniqueName: \"kubernetes.io/projected/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-kube-api-access-n8q24\") pod \"cinder-db-create-dlcdx\" (UID: \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\") " pod="openstack/cinder-db-create-dlcdx" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.364506 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4584e142-8670-4bae-a757-fcbe7cb3e614-operator-scripts\") pod \"barbican-db-create-ghqf7\" (UID: \"4584e142-8670-4bae-a757-fcbe7cb3e614\") " pod="openstack/barbican-db-create-ghqf7" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.364566 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-operator-scripts\") pod \"cinder-db-create-dlcdx\" (UID: \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\") " pod="openstack/cinder-db-create-dlcdx" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.364613 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-config-data\") pod \"keystone-db-sync-44kzh\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.364694 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l866w\" (UniqueName: 
\"kubernetes.io/projected/4584e142-8670-4bae-a757-fcbe7cb3e614-kube-api-access-l866w\") pod \"barbican-db-create-ghqf7\" (UID: \"4584e142-8670-4bae-a757-fcbe7cb3e614\") " pod="openstack/barbican-db-create-ghqf7" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.367525 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-operator-scripts\") pod \"cinder-db-create-dlcdx\" (UID: \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\") " pod="openstack/cinder-db-create-dlcdx" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.371318 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-combined-ca-bundle\") pod \"keystone-db-sync-44kzh\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.372337 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-operator-scripts\") pod \"manila-db-create-fjnrj\" (UID: \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\") " pod="openstack/manila-db-create-fjnrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.383330 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-config-data\") pod \"keystone-db-sync-44kzh\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.398251 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ghqf7"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.399949 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lr7x4\" (UniqueName: \"kubernetes.io/projected/73fbf5bd-1541-450a-be13-daf65ce110ac-kube-api-access-lr7x4\") pod \"keystone-db-sync-44kzh\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.400889 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptmwt\" (UniqueName: \"kubernetes.io/projected/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-kube-api-access-ptmwt\") pod \"manila-db-create-fjnrj\" (UID: \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\") " pod="openstack/manila-db-create-fjnrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.410428 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b0fa-account-create-update-jg4lr"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.413899 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nl9t\" (UniqueName: \"kubernetes.io/projected/e9745dc7-7db8-47fd-9e70-4e88a4652c52-kube-api-access-6nl9t\") pod \"neutron-db-create-strtq\" (UID: \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\") " pod="openstack/neutron-db-create-strtq" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.423267 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8q24\" (UniqueName: \"kubernetes.io/projected/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-kube-api-access-n8q24\") pod \"cinder-db-create-dlcdx\" (UID: \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\") " pod="openstack/cinder-db-create-dlcdx" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.426593 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-fjnrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.426635 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-3128-account-create-update-4n9hw"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.428035 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3128-account-create-update-4n9hw" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.431459 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.451350 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.456622 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3128-account-create-update-4n9hw"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.466587 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl4ln\" (UniqueName: \"kubernetes.io/projected/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-kube-api-access-sl4ln\") pod \"manila-3128-account-create-update-4n9hw\" (UID: \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\") " pod="openstack/manila-3128-account-create-update-4n9hw" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.466653 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70a7564-5a51-4289-8f7f-22c3258a649a-operator-scripts\") pod \"neutron-b0fa-account-create-update-jg4lr\" (UID: \"e70a7564-5a51-4289-8f7f-22c3258a649a\") " pod="openstack/neutron-b0fa-account-create-update-jg4lr" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.466704 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/1da3ca91-2523-4bbc-9ee8-5957e040e522-operator-scripts\") pod \"cinder-7762-account-create-update-xzjrj\" (UID: \"1da3ca91-2523-4bbc-9ee8-5957e040e522\") " pod="openstack/cinder-7762-account-create-update-xzjrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.466736 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-operator-scripts\") pod \"manila-3128-account-create-update-4n9hw\" (UID: \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\") " pod="openstack/manila-3128-account-create-update-4n9hw" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.466761 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4584e142-8670-4bae-a757-fcbe7cb3e614-operator-scripts\") pod \"barbican-db-create-ghqf7\" (UID: \"4584e142-8670-4bae-a757-fcbe7cb3e614\") " pod="openstack/barbican-db-create-ghqf7" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.466786 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8tr\" (UniqueName: \"kubernetes.io/projected/e70a7564-5a51-4289-8f7f-22c3258a649a-kube-api-access-kn8tr\") pod \"neutron-b0fa-account-create-update-jg4lr\" (UID: \"e70a7564-5a51-4289-8f7f-22c3258a649a\") " pod="openstack/neutron-b0fa-account-create-update-jg4lr" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.466896 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l866w\" (UniqueName: \"kubernetes.io/projected/4584e142-8670-4bae-a757-fcbe7cb3e614-kube-api-access-l866w\") pod \"barbican-db-create-ghqf7\" (UID: \"4584e142-8670-4bae-a757-fcbe7cb3e614\") " pod="openstack/barbican-db-create-ghqf7" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.466958 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d8hv\" (UniqueName: \"kubernetes.io/projected/1da3ca91-2523-4bbc-9ee8-5957e040e522-kube-api-access-4d8hv\") pod \"cinder-7762-account-create-update-xzjrj\" (UID: \"1da3ca91-2523-4bbc-9ee8-5957e040e522\") " pod="openstack/cinder-7762-account-create-update-xzjrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.468397 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4584e142-8670-4bae-a757-fcbe7cb3e614-operator-scripts\") pod \"barbican-db-create-ghqf7\" (UID: \"4584e142-8670-4bae-a757-fcbe7cb3e614\") " pod="openstack/barbican-db-create-ghqf7" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.469360 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da3ca91-2523-4bbc-9ee8-5957e040e522-operator-scripts\") pod \"cinder-7762-account-create-update-xzjrj\" (UID: \"1da3ca91-2523-4bbc-9ee8-5957e040e522\") " pod="openstack/cinder-7762-account-create-update-xzjrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.490123 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l866w\" (UniqueName: \"kubernetes.io/projected/4584e142-8670-4bae-a757-fcbe7cb3e614-kube-api-access-l866w\") pod \"barbican-db-create-ghqf7\" (UID: \"4584e142-8670-4bae-a757-fcbe7cb3e614\") " pod="openstack/barbican-db-create-ghqf7" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.494661 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d8hv\" (UniqueName: \"kubernetes.io/projected/1da3ca91-2523-4bbc-9ee8-5957e040e522-kube-api-access-4d8hv\") pod \"cinder-7762-account-create-update-xzjrj\" (UID: \"1da3ca91-2523-4bbc-9ee8-5957e040e522\") " pod="openstack/cinder-7762-account-create-update-xzjrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 
20:48:10.568003 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl4ln\" (UniqueName: \"kubernetes.io/projected/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-kube-api-access-sl4ln\") pod \"manila-3128-account-create-update-4n9hw\" (UID: \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\") " pod="openstack/manila-3128-account-create-update-4n9hw" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.568064 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70a7564-5a51-4289-8f7f-22c3258a649a-operator-scripts\") pod \"neutron-b0fa-account-create-update-jg4lr\" (UID: \"e70a7564-5a51-4289-8f7f-22c3258a649a\") " pod="openstack/neutron-b0fa-account-create-update-jg4lr" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.568099 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-operator-scripts\") pod \"manila-3128-account-create-update-4n9hw\" (UID: \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\") " pod="openstack/manila-3128-account-create-update-4n9hw" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.568122 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8tr\" (UniqueName: \"kubernetes.io/projected/e70a7564-5a51-4289-8f7f-22c3258a649a-kube-api-access-kn8tr\") pod \"neutron-b0fa-account-create-update-jg4lr\" (UID: \"e70a7564-5a51-4289-8f7f-22c3258a649a\") " pod="openstack/neutron-b0fa-account-create-update-jg4lr" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.569425 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70a7564-5a51-4289-8f7f-22c3258a649a-operator-scripts\") pod \"neutron-b0fa-account-create-update-jg4lr\" (UID: \"e70a7564-5a51-4289-8f7f-22c3258a649a\") " 
pod="openstack/neutron-b0fa-account-create-update-jg4lr" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.569927 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-operator-scripts\") pod \"manila-3128-account-create-update-4n9hw\" (UID: \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\") " pod="openstack/manila-3128-account-create-update-4n9hw" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.621576 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8tr\" (UniqueName: \"kubernetes.io/projected/e70a7564-5a51-4289-8f7f-22c3258a649a-kube-api-access-kn8tr\") pod \"neutron-b0fa-account-create-update-jg4lr\" (UID: \"e70a7564-5a51-4289-8f7f-22c3258a649a\") " pod="openstack/neutron-b0fa-account-create-update-jg4lr" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.641790 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-strtq" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.644120 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl4ln\" (UniqueName: \"kubernetes.io/projected/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-kube-api-access-sl4ln\") pod \"manila-3128-account-create-update-4n9hw\" (UID: \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\") " pod="openstack/manila-3128-account-create-update-4n9hw" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.672655 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6239-account-create-update-sx4pg"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.676054 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6239-account-create-update-sx4pg" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.673843 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6239-account-create-update-sx4pg"] Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.686297 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.718707 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7762-account-create-update-xzjrj" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.719452 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dlcdx" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.724927 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ghqf7" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.738430 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0fa-account-create-update-jg4lr" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.755721 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-3128-account-create-update-4n9hw" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.776962 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jclh\" (UniqueName: \"kubernetes.io/projected/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-kube-api-access-2jclh\") pod \"barbican-6239-account-create-update-sx4pg\" (UID: \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\") " pod="openstack/barbican-6239-account-create-update-sx4pg" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.777029 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-operator-scripts\") pod \"barbican-6239-account-create-update-sx4pg\" (UID: \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\") " pod="openstack/barbican-6239-account-create-update-sx4pg" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.879224 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jclh\" (UniqueName: \"kubernetes.io/projected/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-kube-api-access-2jclh\") pod \"barbican-6239-account-create-update-sx4pg\" (UID: \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\") " pod="openstack/barbican-6239-account-create-update-sx4pg" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.879273 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-operator-scripts\") pod \"barbican-6239-account-create-update-sx4pg\" (UID: \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\") " pod="openstack/barbican-6239-account-create-update-sx4pg" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.880179 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-operator-scripts\") pod \"barbican-6239-account-create-update-sx4pg\" (UID: \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\") " pod="openstack/barbican-6239-account-create-update-sx4pg" Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.891327 5029 generic.go:334] "Generic (PLEG): container finished" podID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerID="39e96f71e696d13c55a0f76116a7a6ec474171ee48ec48e7992466cc9c087790" exitCode=0 Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.891380 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" event={"ID":"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3","Type":"ContainerDied","Data":"39e96f71e696d13c55a0f76116a7a6ec474171ee48ec48e7992466cc9c087790"} Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.891408 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" event={"ID":"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3","Type":"ContainerStarted","Data":"5bd57658c22d7895283a718211ebf118f982a4dd5279271d27a371e5221b86f0"} Mar 13 20:48:10 crc kubenswrapper[5029]: I0313 20:48:10.901921 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jclh\" (UniqueName: \"kubernetes.io/projected/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-kube-api-access-2jclh\") pod \"barbican-6239-account-create-update-sx4pg\" (UID: \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\") " pod="openstack/barbican-6239-account-create-update-sx4pg" Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.017037 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6239-account-create-update-sx4pg" Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.062913 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-fjnrj"] Mar 13 20:48:11 crc kubenswrapper[5029]: W0313 20:48:11.117984 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeba61da6_5905_40c4_bdf7_dcd9b5e622f1.slice/crio-978f3908e0d7636e564a10705a4f8d0f2f10d788fc28d173fd9ce9bd861497e5 WatchSource:0}: Error finding container 978f3908e0d7636e564a10705a4f8d0f2f10d788fc28d173fd9ce9bd861497e5: Status 404 returned error can't find the container with id 978f3908e0d7636e564a10705a4f8d0f2f10d788fc28d173fd9ce9bd861497e5 Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.260810 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-44kzh"] Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.345482 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-strtq"] Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.548651 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7762-account-create-update-xzjrj"] Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.563384 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3128-account-create-update-4n9hw"] Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.586960 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ghqf7"] Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.599075 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dlcdx"] Mar 13 20:48:11 crc kubenswrapper[5029]: W0313 20:48:11.684702 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9745dc7_7db8_47fd_9e70_4e88a4652c52.slice/crio-87fc10838d38712abc0700110a168cc58474929f64ce758de00a081c73ad858e WatchSource:0}: Error finding container 87fc10838d38712abc0700110a168cc58474929f64ce758de00a081c73ad858e: Status 404 returned error can't find the container with id 87fc10838d38712abc0700110a168cc58474929f64ce758de00a081c73ad858e Mar 13 20:48:11 crc kubenswrapper[5029]: W0313 20:48:11.686825 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73fbf5bd_1541_450a_be13_daf65ce110ac.slice/crio-3965d053b896861703c450b2cc8adc402fbb06f7b2156161fbaf67e9c7f0f025 WatchSource:0}: Error finding container 3965d053b896861703c450b2cc8adc402fbb06f7b2156161fbaf67e9c7f0f025: Status 404 returned error can't find the container with id 3965d053b896861703c450b2cc8adc402fbb06f7b2156161fbaf67e9c7f0f025 Mar 13 20:48:11 crc kubenswrapper[5029]: W0313 20:48:11.691905 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1902f70_49a1_454a_8f7c_e90e2aa9c8ea.slice/crio-b2bcd28e2d62d078196250a2ad8fa2c0822379b9dff00180a2f0063666fee786 WatchSource:0}: Error finding container b2bcd28e2d62d078196250a2ad8fa2c0822379b9dff00180a2f0063666fee786: Status 404 returned error can't find the container with id b2bcd28e2d62d078196250a2ad8fa2c0822379b9dff00180a2f0063666fee786 Mar 13 20:48:11 crc kubenswrapper[5029]: W0313 20:48:11.692391 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da3ca91_2523_4bbc_9ee8_5957e040e522.slice/crio-ff01d4f30675a2b710c7c0e77af5a6bc8ce605b54c9012215a350a09b7f3220b WatchSource:0}: Error finding container ff01d4f30675a2b710c7c0e77af5a6bc8ce605b54c9012215a350a09b7f3220b: Status 404 returned error can't find the container with id 
ff01d4f30675a2b710c7c0e77af5a6bc8ce605b54c9012215a350a09b7f3220b Mar 13 20:48:11 crc kubenswrapper[5029]: W0313 20:48:11.692768 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode22e682c_f9d6_4ef0_a8ad_b87aea2ef852.slice/crio-443d35c89a823db8d38f29c58022ba147b52dd0a366a0c1600f57a1fedaf0ffc WatchSource:0}: Error finding container 443d35c89a823db8d38f29c58022ba147b52dd0a366a0c1600f57a1fedaf0ffc: Status 404 returned error can't find the container with id 443d35c89a823db8d38f29c58022ba147b52dd0a366a0c1600f57a1fedaf0ffc Mar 13 20:48:11 crc kubenswrapper[5029]: W0313 20:48:11.695014 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4584e142_8670_4bae_a757_fcbe7cb3e614.slice/crio-33f4aa039e1a23a740faec1516b58cb1dadd2140cbe9f14da8162f670fdd3598 WatchSource:0}: Error finding container 33f4aa039e1a23a740faec1516b58cb1dadd2140cbe9f14da8162f670fdd3598: Status 404 returned error can't find the container with id 33f4aa039e1a23a740faec1516b58cb1dadd2140cbe9f14da8162f670fdd3598 Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.729529 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b0fa-account-create-update-jg4lr"] Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.766693 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6239-account-create-update-sx4pg"] Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.967282 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" event={"ID":"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3","Type":"ContainerStarted","Data":"507e8ceb54d87635e7785755378901449576508413f6afae8beabd95ed4fe085"} Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.978111 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 
20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.985231 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ghqf7" event={"ID":"4584e142-8670-4bae-a757-fcbe7cb3e614","Type":"ContainerStarted","Data":"33f4aa039e1a23a740faec1516b58cb1dadd2140cbe9f14da8162f670fdd3598"} Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.990654 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6239-account-create-update-sx4pg" event={"ID":"6cc0a76d-907f-4dd2-99be-8dcde78b34b6","Type":"ContainerStarted","Data":"ba1fbc5119d22f38e6ada8198745ea1247e3f96684c234282dcf30f6997fb097"} Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.995711 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0fa-account-create-update-jg4lr" event={"ID":"e70a7564-5a51-4289-8f7f-22c3258a649a","Type":"ContainerStarted","Data":"212008d65965f8d758fa5b78f36f5dfce5ef59303abd54afa332b1488bceadd0"} Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.998117 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44kzh" event={"ID":"73fbf5bd-1541-450a-be13-daf65ce110ac","Type":"ContainerStarted","Data":"3965d053b896861703c450b2cc8adc402fbb06f7b2156161fbaf67e9c7f0f025"} Mar 13 20:48:11 crc kubenswrapper[5029]: I0313 20:48:11.999441 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-strtq" event={"ID":"e9745dc7-7db8-47fd-9e70-4e88a4652c52","Type":"ContainerStarted","Data":"87fc10838d38712abc0700110a168cc58474929f64ce758de00a081c73ad858e"} Mar 13 20:48:12 crc kubenswrapper[5029]: I0313 20:48:12.001870 5029 generic.go:334] "Generic (PLEG): container finished" podID="eba61da6-5905-40c4-bdf7-dcd9b5e622f1" containerID="049aa96075ef0ac84de62704326a7851cea6d3373e67e731afff47bde62f8110" exitCode=0 Mar 13 20:48:12 crc kubenswrapper[5029]: I0313 20:48:12.001936 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-fjnrj" 
event={"ID":"eba61da6-5905-40c4-bdf7-dcd9b5e622f1","Type":"ContainerDied","Data":"049aa96075ef0ac84de62704326a7851cea6d3373e67e731afff47bde62f8110"} Mar 13 20:48:12 crc kubenswrapper[5029]: I0313 20:48:12.001963 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-fjnrj" event={"ID":"eba61da6-5905-40c4-bdf7-dcd9b5e622f1","Type":"ContainerStarted","Data":"978f3908e0d7636e564a10705a4f8d0f2f10d788fc28d173fd9ce9bd861497e5"} Mar 13 20:48:12 crc kubenswrapper[5029]: I0313 20:48:12.003712 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3128-account-create-update-4n9hw" event={"ID":"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea","Type":"ContainerStarted","Data":"b2bcd28e2d62d078196250a2ad8fa2c0822379b9dff00180a2f0063666fee786"} Mar 13 20:48:12 crc kubenswrapper[5029]: I0313 20:48:12.006748 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dlcdx" event={"ID":"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852","Type":"ContainerStarted","Data":"443d35c89a823db8d38f29c58022ba147b52dd0a366a0c1600f57a1fedaf0ffc"} Mar 13 20:48:12 crc kubenswrapper[5029]: I0313 20:48:12.009528 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7762-account-create-update-xzjrj" event={"ID":"1da3ca91-2523-4bbc-9ee8-5957e040e522","Type":"ContainerStarted","Data":"ff01d4f30675a2b710c7c0e77af5a6bc8ce605b54c9012215a350a09b7f3220b"} Mar 13 20:48:12 crc kubenswrapper[5029]: I0313 20:48:12.022422 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" podStartSLOduration=4.022400955 podStartE2EDuration="4.022400955s" podCreationTimestamp="2026-03-13 20:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:48:12.015021373 +0000 UTC m=+1252.031103776" watchObservedRunningTime="2026-03-13 20:48:12.022400955 +0000 UTC m=+1252.038483368" Mar 13 
20:48:12 crc kubenswrapper[5029]: I0313 20:48:12.067835 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-strtq" podStartSLOduration=2.067814305 podStartE2EDuration="2.067814305s" podCreationTimestamp="2026-03-13 20:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:48:12.059843617 +0000 UTC m=+1252.075926030" watchObservedRunningTime="2026-03-13 20:48:12.067814305 +0000 UTC m=+1252.083896718"
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.020125 5029 generic.go:334] "Generic (PLEG): container finished" podID="6cc0a76d-907f-4dd2-99be-8dcde78b34b6" containerID="6aceb4f35c9aa0c2d2259e6a2de1e32d6ef1d23e6f0098d128d9ecd972bb6795" exitCode=0
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.020694 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6239-account-create-update-sx4pg" event={"ID":"6cc0a76d-907f-4dd2-99be-8dcde78b34b6","Type":"ContainerDied","Data":"6aceb4f35c9aa0c2d2259e6a2de1e32d6ef1d23e6f0098d128d9ecd972bb6795"}
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.023169 5029 generic.go:334] "Generic (PLEG): container finished" podID="e70a7564-5a51-4289-8f7f-22c3258a649a" containerID="531b68d8da82b1abbefcb1f1bb823b6517d9721fdfab8292ec36bb4778d381fa" exitCode=0
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.023227 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0fa-account-create-update-jg4lr" event={"ID":"e70a7564-5a51-4289-8f7f-22c3258a649a","Type":"ContainerDied","Data":"531b68d8da82b1abbefcb1f1bb823b6517d9721fdfab8292ec36bb4778d381fa"}
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.025452 5029 generic.go:334] "Generic (PLEG): container finished" podID="e1902f70-49a1-454a-8f7c-e90e2aa9c8ea" containerID="036c75a5a372348a1fb1e35e152bd9e5bda1fb5aae5f2275474af750989f85fb" exitCode=0
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.025514 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3128-account-create-update-4n9hw" event={"ID":"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea","Type":"ContainerDied","Data":"036c75a5a372348a1fb1e35e152bd9e5bda1fb5aae5f2275474af750989f85fb"}
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.027343 5029 generic.go:334] "Generic (PLEG): container finished" podID="e22e682c-f9d6-4ef0-a8ad-b87aea2ef852" containerID="186836aa2d462544ae027ce0afa042a2aa8a312169a1ad0ac4d9b916501037f0" exitCode=0
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.027450 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dlcdx" event={"ID":"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852","Type":"ContainerDied","Data":"186836aa2d462544ae027ce0afa042a2aa8a312169a1ad0ac4d9b916501037f0"}
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.029287 5029 generic.go:334] "Generic (PLEG): container finished" podID="1da3ca91-2523-4bbc-9ee8-5957e040e522" containerID="8c4f7eb4e692d283364545720f6f4377dbeb4858ae2908160a897ee0cc59f690" exitCode=0
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.029334 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7762-account-create-update-xzjrj" event={"ID":"1da3ca91-2523-4bbc-9ee8-5957e040e522","Type":"ContainerDied","Data":"8c4f7eb4e692d283364545720f6f4377dbeb4858ae2908160a897ee0cc59f690"}
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.054242 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"9f7c7f6ae93b50a84bf2c4682ce8bcec72d8c36533144b095732eacd382c9b88"}
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.054287 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"3a93ddb7d2367389a048cf112d59c4d1444e3bae550f3dbef437b0c9f2d34ec4"}
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.061892 5029 generic.go:334] "Generic (PLEG): container finished" podID="4584e142-8670-4bae-a757-fcbe7cb3e614" containerID="f01a5be770269349a81fcfeef8e234871ddb85d02a87e24e8a345ef7d329a17b" exitCode=0
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.062188 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ghqf7" event={"ID":"4584e142-8670-4bae-a757-fcbe7cb3e614","Type":"ContainerDied","Data":"f01a5be770269349a81fcfeef8e234871ddb85d02a87e24e8a345ef7d329a17b"}
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.078611 5029 generic.go:334] "Generic (PLEG): container finished" podID="e9745dc7-7db8-47fd-9e70-4e88a4652c52" containerID="2142b16c353fe61ad5f6e3ed36766a4bcd0cc2f5603879cba12d4d5612d8d264" exitCode=0
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.078905 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-strtq" event={"ID":"e9745dc7-7db8-47fd-9e70-4e88a4652c52","Type":"ContainerDied","Data":"2142b16c353fe61ad5f6e3ed36766a4bcd0cc2f5603879cba12d4d5612d8d264"}
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.483798 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-fjnrj"
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.566150 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-operator-scripts\") pod \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\" (UID: \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\") "
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.566457 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptmwt\" (UniqueName: \"kubernetes.io/projected/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-kube-api-access-ptmwt\") pod \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\" (UID: \"eba61da6-5905-40c4-bdf7-dcd9b5e622f1\") "
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.568326 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eba61da6-5905-40c4-bdf7-dcd9b5e622f1" (UID: "eba61da6-5905-40c4-bdf7-dcd9b5e622f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.583801 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-kube-api-access-ptmwt" (OuterVolumeSpecName: "kube-api-access-ptmwt") pod "eba61da6-5905-40c4-bdf7-dcd9b5e622f1" (UID: "eba61da6-5905-40c4-bdf7-dcd9b5e622f1"). InnerVolumeSpecName "kube-api-access-ptmwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.669256 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:13 crc kubenswrapper[5029]: I0313 20:48:13.669301 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptmwt\" (UniqueName: \"kubernetes.io/projected/eba61da6-5905-40c4-bdf7-dcd9b5e622f1-kube-api-access-ptmwt\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:14 crc kubenswrapper[5029]: I0313 20:48:14.091167 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"c7c42a46b65f63ff18066a6075be7270f374467b5a5c5ca9e4a1e16df50568a5"}
Mar 13 20:48:14 crc kubenswrapper[5029]: I0313 20:48:14.091507 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"6b5903f9f89511b2da0ebc06e48278a1c921267a2df3dcc9069fd90f815de640"}
Mar 13 20:48:14 crc kubenswrapper[5029]: I0313 20:48:14.093324 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-fjnrj"
Mar 13 20:48:14 crc kubenswrapper[5029]: I0313 20:48:14.095091 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-fjnrj" event={"ID":"eba61da6-5905-40c4-bdf7-dcd9b5e622f1","Type":"ContainerDied","Data":"978f3908e0d7636e564a10705a4f8d0f2f10d788fc28d173fd9ce9bd861497e5"}
Mar 13 20:48:14 crc kubenswrapper[5029]: I0313 20:48:14.095126 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="978f3908e0d7636e564a10705a4f8d0f2f10d788fc28d173fd9ce9bd861497e5"
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.718997 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-strtq"
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.725661 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dlcdx"
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.760622 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ghqf7"
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.767987 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0fa-account-create-update-jg4lr"
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.786578 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6239-account-create-update-sx4pg"
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.795391 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7762-account-create-update-xzjrj"
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.816623 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3128-account-create-update-4n9hw"
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.848033 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da3ca91-2523-4bbc-9ee8-5957e040e522-operator-scripts\") pod \"1da3ca91-2523-4bbc-9ee8-5957e040e522\" (UID: \"1da3ca91-2523-4bbc-9ee8-5957e040e522\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.848298 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jclh\" (UniqueName: \"kubernetes.io/projected/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-kube-api-access-2jclh\") pod \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\" (UID: \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.848453 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70a7564-5a51-4289-8f7f-22c3258a649a-operator-scripts\") pod \"e70a7564-5a51-4289-8f7f-22c3258a649a\" (UID: \"e70a7564-5a51-4289-8f7f-22c3258a649a\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.848683 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-operator-scripts\") pod \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\" (UID: \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.848979 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-operator-scripts\") pod \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\" (UID: \"6cc0a76d-907f-4dd2-99be-8dcde78b34b6\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.849132 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4584e142-8670-4bae-a757-fcbe7cb3e614-operator-scripts\") pod \"4584e142-8670-4bae-a757-fcbe7cb3e614\" (UID: \"4584e142-8670-4bae-a757-fcbe7cb3e614\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.849288 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l866w\" (UniqueName: \"kubernetes.io/projected/4584e142-8670-4bae-a757-fcbe7cb3e614-kube-api-access-l866w\") pod \"4584e142-8670-4bae-a757-fcbe7cb3e614\" (UID: \"4584e142-8670-4bae-a757-fcbe7cb3e614\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.849391 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8q24\" (UniqueName: \"kubernetes.io/projected/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-kube-api-access-n8q24\") pod \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\" (UID: \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.849494 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9745dc7-7db8-47fd-9e70-4e88a4652c52-operator-scripts\") pod \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\" (UID: \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.850092 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn8tr\" (UniqueName: \"kubernetes.io/projected/e70a7564-5a51-4289-8f7f-22c3258a649a-kube-api-access-kn8tr\") pod \"e70a7564-5a51-4289-8f7f-22c3258a649a\" (UID: \"e70a7564-5a51-4289-8f7f-22c3258a649a\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.851101 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4584e142-8670-4bae-a757-fcbe7cb3e614-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4584e142-8670-4bae-a757-fcbe7cb3e614" (UID: "4584e142-8670-4bae-a757-fcbe7cb3e614"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.851337 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6cc0a76d-907f-4dd2-99be-8dcde78b34b6" (UID: "6cc0a76d-907f-4dd2-99be-8dcde78b34b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.851809 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1902f70-49a1-454a-8f7c-e90e2aa9c8ea" (UID: "e1902f70-49a1-454a-8f7c-e90e2aa9c8ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.851976 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl4ln\" (UniqueName: \"kubernetes.io/projected/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-kube-api-access-sl4ln\") pod \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\" (UID: \"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.852140 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-operator-scripts\") pod \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\" (UID: \"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.852306 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nl9t\" (UniqueName: \"kubernetes.io/projected/e9745dc7-7db8-47fd-9e70-4e88a4652c52-kube-api-access-6nl9t\") pod \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\" (UID: \"e9745dc7-7db8-47fd-9e70-4e88a4652c52\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.852453 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d8hv\" (UniqueName: \"kubernetes.io/projected/1da3ca91-2523-4bbc-9ee8-5957e040e522-kube-api-access-4d8hv\") pod \"1da3ca91-2523-4bbc-9ee8-5957e040e522\" (UID: \"1da3ca91-2523-4bbc-9ee8-5957e040e522\") "
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.853316 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.853483 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.853554 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e22e682c-f9d6-4ef0-a8ad-b87aea2ef852" (UID: "e22e682c-f9d6-4ef0-a8ad-b87aea2ef852"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.853568 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4584e142-8670-4bae-a757-fcbe7cb3e614-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.856370 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-kube-api-access-n8q24" (OuterVolumeSpecName: "kube-api-access-n8q24") pod "e22e682c-f9d6-4ef0-a8ad-b87aea2ef852" (UID: "e22e682c-f9d6-4ef0-a8ad-b87aea2ef852"). InnerVolumeSpecName "kube-api-access-n8q24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.857492 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da3ca91-2523-4bbc-9ee8-5957e040e522-kube-api-access-4d8hv" (OuterVolumeSpecName: "kube-api-access-4d8hv") pod "1da3ca91-2523-4bbc-9ee8-5957e040e522" (UID: "1da3ca91-2523-4bbc-9ee8-5957e040e522"). InnerVolumeSpecName "kube-api-access-4d8hv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.859355 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70a7564-5a51-4289-8f7f-22c3258a649a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e70a7564-5a51-4289-8f7f-22c3258a649a" (UID: "e70a7564-5a51-4289-8f7f-22c3258a649a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.860233 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9745dc7-7db8-47fd-9e70-4e88a4652c52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9745dc7-7db8-47fd-9e70-4e88a4652c52" (UID: "e9745dc7-7db8-47fd-9e70-4e88a4652c52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.860407 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-kube-api-access-sl4ln" (OuterVolumeSpecName: "kube-api-access-sl4ln") pod "e1902f70-49a1-454a-8f7c-e90e2aa9c8ea" (UID: "e1902f70-49a1-454a-8f7c-e90e2aa9c8ea"). InnerVolumeSpecName "kube-api-access-sl4ln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.861105 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4584e142-8670-4bae-a757-fcbe7cb3e614-kube-api-access-l866w" (OuterVolumeSpecName: "kube-api-access-l866w") pod "4584e142-8670-4bae-a757-fcbe7cb3e614" (UID: "4584e142-8670-4bae-a757-fcbe7cb3e614"). InnerVolumeSpecName "kube-api-access-l866w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.864972 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70a7564-5a51-4289-8f7f-22c3258a649a-kube-api-access-kn8tr" (OuterVolumeSpecName: "kube-api-access-kn8tr") pod "e70a7564-5a51-4289-8f7f-22c3258a649a" (UID: "e70a7564-5a51-4289-8f7f-22c3258a649a"). InnerVolumeSpecName "kube-api-access-kn8tr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.871470 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9745dc7-7db8-47fd-9e70-4e88a4652c52-kube-api-access-6nl9t" (OuterVolumeSpecName: "kube-api-access-6nl9t") pod "e9745dc7-7db8-47fd-9e70-4e88a4652c52" (UID: "e9745dc7-7db8-47fd-9e70-4e88a4652c52"). InnerVolumeSpecName "kube-api-access-6nl9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.887158 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1da3ca91-2523-4bbc-9ee8-5957e040e522-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1da3ca91-2523-4bbc-9ee8-5957e040e522" (UID: "1da3ca91-2523-4bbc-9ee8-5957e040e522"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.888255 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-kube-api-access-2jclh" (OuterVolumeSpecName: "kube-api-access-2jclh") pod "6cc0a76d-907f-4dd2-99be-8dcde78b34b6" (UID: "6cc0a76d-907f-4dd2-99be-8dcde78b34b6"). InnerVolumeSpecName "kube-api-access-2jclh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955509 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l866w\" (UniqueName: \"kubernetes.io/projected/4584e142-8670-4bae-a757-fcbe7cb3e614-kube-api-access-l866w\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955544 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8q24\" (UniqueName: \"kubernetes.io/projected/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-kube-api-access-n8q24\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955557 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9745dc7-7db8-47fd-9e70-4e88a4652c52-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955569 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn8tr\" (UniqueName: \"kubernetes.io/projected/e70a7564-5a51-4289-8f7f-22c3258a649a-kube-api-access-kn8tr\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955579 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl4ln\" (UniqueName: \"kubernetes.io/projected/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea-kube-api-access-sl4ln\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955591 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955603 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nl9t\" (UniqueName: \"kubernetes.io/projected/e9745dc7-7db8-47fd-9e70-4e88a4652c52-kube-api-access-6nl9t\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955614 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d8hv\" (UniqueName: \"kubernetes.io/projected/1da3ca91-2523-4bbc-9ee8-5957e040e522-kube-api-access-4d8hv\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955622 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da3ca91-2523-4bbc-9ee8-5957e040e522-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955630 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jclh\" (UniqueName: \"kubernetes.io/projected/6cc0a76d-907f-4dd2-99be-8dcde78b34b6-kube-api-access-2jclh\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:17 crc kubenswrapper[5029]: I0313 20:48:17.955638 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70a7564-5a51-4289-8f7f-22c3258a649a-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.129585 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7762-account-create-update-xzjrj" event={"ID":"1da3ca91-2523-4bbc-9ee8-5957e040e522","Type":"ContainerDied","Data":"ff01d4f30675a2b710c7c0e77af5a6bc8ce605b54c9012215a350a09b7f3220b"}
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.129798 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff01d4f30675a2b710c7c0e77af5a6bc8ce605b54c9012215a350a09b7f3220b"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.129663 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7762-account-create-update-xzjrj"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.131378 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44kzh" event={"ID":"73fbf5bd-1541-450a-be13-daf65ce110ac","Type":"ContainerStarted","Data":"cc61ac623e976451fdba84b055dbdfc2202ed91f2f114dd2ac339ec80e7dcc45"}
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.133094 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ghqf7" event={"ID":"4584e142-8670-4bae-a757-fcbe7cb3e614","Type":"ContainerDied","Data":"33f4aa039e1a23a740faec1516b58cb1dadd2140cbe9f14da8162f670fdd3598"}
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.133314 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33f4aa039e1a23a740faec1516b58cb1dadd2140cbe9f14da8162f670fdd3598"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.133079 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ghqf7"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.151627 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-strtq" event={"ID":"e9745dc7-7db8-47fd-9e70-4e88a4652c52","Type":"ContainerDied","Data":"87fc10838d38712abc0700110a168cc58474929f64ce758de00a081c73ad858e"}
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.151669 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-strtq"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.151679 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87fc10838d38712abc0700110a168cc58474929f64ce758de00a081c73ad858e"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.154750 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6239-account-create-update-sx4pg" event={"ID":"6cc0a76d-907f-4dd2-99be-8dcde78b34b6","Type":"ContainerDied","Data":"ba1fbc5119d22f38e6ada8198745ea1247e3f96684c234282dcf30f6997fb097"}
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.154896 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba1fbc5119d22f38e6ada8198745ea1247e3f96684c234282dcf30f6997fb097"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.154977 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6239-account-create-update-sx4pg"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.156673 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0fa-account-create-update-jg4lr" event={"ID":"e70a7564-5a51-4289-8f7f-22c3258a649a","Type":"ContainerDied","Data":"212008d65965f8d758fa5b78f36f5dfce5ef59303abd54afa332b1488bceadd0"}
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.156702 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="212008d65965f8d758fa5b78f36f5dfce5ef59303abd54afa332b1488bceadd0"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.156754 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0fa-account-create-update-jg4lr"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.163544 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3128-account-create-update-4n9hw" event={"ID":"e1902f70-49a1-454a-8f7c-e90e2aa9c8ea","Type":"ContainerDied","Data":"b2bcd28e2d62d078196250a2ad8fa2c0822379b9dff00180a2f0063666fee786"}
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.163576 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2bcd28e2d62d078196250a2ad8fa2c0822379b9dff00180a2f0063666fee786"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.163631 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3128-account-create-update-4n9hw"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.167935 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dlcdx" event={"ID":"e22e682c-f9d6-4ef0-a8ad-b87aea2ef852","Type":"ContainerDied","Data":"443d35c89a823db8d38f29c58022ba147b52dd0a366a0c1600f57a1fedaf0ffc"}
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.167980 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dlcdx"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.167988 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="443d35c89a823db8d38f29c58022ba147b52dd0a366a0c1600f57a1fedaf0ffc"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.769220 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-44kzh" podStartSLOduration=2.925445805 podStartE2EDuration="8.769200053s" podCreationTimestamp="2026-03-13 20:48:10 +0000 UTC" firstStartedPulling="2026-03-13 20:48:11.717214174 +0000 UTC m=+1251.733296567" lastFinishedPulling="2026-03-13 20:48:17.560968412 +0000 UTC m=+1257.577050815" observedRunningTime="2026-03-13 20:48:18.150302169 +0000 UTC m=+1258.166384572" watchObservedRunningTime="2026-03-13 20:48:18.769200053 +0000 UTC m=+1258.785282456"
Mar 13 20:48:18 crc kubenswrapper[5029]: I0313 20:48:18.890302 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9"
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.005506 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cbld8"]
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.005767 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-cbld8" podUID="1b871cf9-26fb-481d-8404-9c767e53937c" containerName="dnsmasq-dns" containerID="cri-o://f9477643abaa9c7938b07177b1184fd19ebef5e684476848a80097c91612d7db" gracePeriod=10
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.201619 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"f7640c02a537ef7fef2d9d101065f9b7fd58d658775c194f41933ffbec91d289"}
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.202082 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"56a46ae39f833da556e53a8e9e7f9d962ed1fb2b0f5b9c887fe52c40f6db70e5"}
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.206169 5029 generic.go:334] "Generic (PLEG): container finished" podID="1b871cf9-26fb-481d-8404-9c767e53937c" containerID="f9477643abaa9c7938b07177b1184fd19ebef5e684476848a80097c91612d7db" exitCode=0
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.206399 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cbld8" event={"ID":"1b871cf9-26fb-481d-8404-9c767e53937c","Type":"ContainerDied","Data":"f9477643abaa9c7938b07177b1184fd19ebef5e684476848a80097c91612d7db"}
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.461888 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cbld8"
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.492191 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g65h5\" (UniqueName: \"kubernetes.io/projected/1b871cf9-26fb-481d-8404-9c767e53937c-kube-api-access-g65h5\") pod \"1b871cf9-26fb-481d-8404-9c767e53937c\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") "
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.492247 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-nb\") pod \"1b871cf9-26fb-481d-8404-9c767e53937c\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") "
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.492303 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-config\") pod \"1b871cf9-26fb-481d-8404-9c767e53937c\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") "
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.492327 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-dns-svc\") pod \"1b871cf9-26fb-481d-8404-9c767e53937c\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") "
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.492407 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-sb\") pod \"1b871cf9-26fb-481d-8404-9c767e53937c\" (UID: \"1b871cf9-26fb-481d-8404-9c767e53937c\") "
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.519666 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b871cf9-26fb-481d-8404-9c767e53937c-kube-api-access-g65h5" (OuterVolumeSpecName: "kube-api-access-g65h5") pod "1b871cf9-26fb-481d-8404-9c767e53937c" (UID: "1b871cf9-26fb-481d-8404-9c767e53937c"). InnerVolumeSpecName "kube-api-access-g65h5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.566117 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b871cf9-26fb-481d-8404-9c767e53937c" (UID: "1b871cf9-26fb-481d-8404-9c767e53937c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.595270 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g65h5\" (UniqueName: \"kubernetes.io/projected/1b871cf9-26fb-481d-8404-9c767e53937c-kube-api-access-g65h5\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.595313 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.618458 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b871cf9-26fb-481d-8404-9c767e53937c" (UID: "1b871cf9-26fb-481d-8404-9c767e53937c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.634007 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b871cf9-26fb-481d-8404-9c767e53937c" (UID: "1b871cf9-26fb-481d-8404-9c767e53937c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.654335 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-config" (OuterVolumeSpecName: "config") pod "1b871cf9-26fb-481d-8404-9c767e53937c" (UID: "1b871cf9-26fb-481d-8404-9c767e53937c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.696725 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.696768 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:19 crc kubenswrapper[5029]: I0313 20:48:19.696783 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b871cf9-26fb-481d-8404-9c767e53937c-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.254239 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cbld8" event={"ID":"1b871cf9-26fb-481d-8404-9c767e53937c","Type":"ContainerDied","Data":"6cef8bbadfcf3f1637ceb7fcd3c8eaf0ce98252ff7d27d48c8dce9c9af75214b"}
Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.255750 5029 scope.go:117] "RemoveContainer" containerID="f9477643abaa9c7938b07177b1184fd19ebef5e684476848a80097c91612d7db"
Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.254305 5029 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cbld8" Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.266686 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"91eb35fb33314860f283544572ae8593a40cb907ab3734906176af8dff65c012"} Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.266838 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"6eb6f7f621dac4fbe520f0af81436a6f943dc7e9eaaef4384522f0e1d0e26d51"} Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.266951 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"cd3ef3beb630a1f13a129e23043eef2d403371e3a1226ea2451fc12728b1b5fc"} Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.267021 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"74da1aecb1468e1dcd8a3e5dcfa86b551ebb26775bb6145d906c035f572ae172"} Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.281736 5029 scope.go:117] "RemoveContainer" containerID="5723ba9a162a9754a90e4fef1199e39f3b33753f8e56d6d546bc211daf54d0a4" Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.334472 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cbld8"] Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.342480 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cbld8"] Mar 13 20:48:20 crc kubenswrapper[5029]: I0313 20:48:20.646397 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b871cf9-26fb-481d-8404-9c767e53937c" 
path="/var/lib/kubelet/pods/1b871cf9-26fb-481d-8404-9c767e53937c/volumes" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.282284 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81a1e5be-bbdf-4a80-a209-3acb956f5c86","Type":"ContainerStarted","Data":"60dca02f7c3079968911981441bffeef7d43a39fb86da75d6253b3b5a0410c8a"} Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.323677 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.441989468 podStartE2EDuration="49.323651582s" podCreationTimestamp="2026-03-13 20:47:32 +0000 UTC" firstStartedPulling="2026-03-13 20:48:06.380649712 +0000 UTC m=+1246.396732115" lastFinishedPulling="2026-03-13 20:48:18.262311826 +0000 UTC m=+1258.278394229" observedRunningTime="2026-03-13 20:48:21.314267415 +0000 UTC m=+1261.330349828" watchObservedRunningTime="2026-03-13 20:48:21.323651582 +0000 UTC m=+1261.339733985" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.618447 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-qvb92"] Mar 13 20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.618816 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc0a76d-907f-4dd2-99be-8dcde78b34b6" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.618834 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc0a76d-907f-4dd2-99be-8dcde78b34b6" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.618886 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9745dc7-7db8-47fd-9e70-4e88a4652c52" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.618893 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9745dc7-7db8-47fd-9e70-4e88a4652c52" containerName="mariadb-database-create" Mar 13 
20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.618912 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70a7564-5a51-4289-8f7f-22c3258a649a" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.618920 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70a7564-5a51-4289-8f7f-22c3258a649a" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.618939 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da3ca91-2523-4bbc-9ee8-5957e040e522" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.618946 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da3ca91-2523-4bbc-9ee8-5957e040e522" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.618954 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4584e142-8670-4bae-a757-fcbe7cb3e614" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.618960 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4584e142-8670-4bae-a757-fcbe7cb3e614" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.618975 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22e682c-f9d6-4ef0-a8ad-b87aea2ef852" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.618982 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22e682c-f9d6-4ef0-a8ad-b87aea2ef852" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.619012 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b871cf9-26fb-481d-8404-9c767e53937c" containerName="init" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619019 5029 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1b871cf9-26fb-481d-8404-9c767e53937c" containerName="init" Mar 13 20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.619035 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1902f70-49a1-454a-8f7c-e90e2aa9c8ea" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619042 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1902f70-49a1-454a-8f7c-e90e2aa9c8ea" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.619048 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b871cf9-26fb-481d-8404-9c767e53937c" containerName="dnsmasq-dns" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619057 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b871cf9-26fb-481d-8404-9c767e53937c" containerName="dnsmasq-dns" Mar 13 20:48:21 crc kubenswrapper[5029]: E0313 20:48:21.619064 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba61da6-5905-40c4-bdf7-dcd9b5e622f1" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619072 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba61da6-5905-40c4-bdf7-dcd9b5e622f1" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619223 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="4584e142-8670-4bae-a757-fcbe7cb3e614" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619234 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e70a7564-5a51-4289-8f7f-22c3258a649a" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619249 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da3ca91-2523-4bbc-9ee8-5957e040e522" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 
20:48:21.619260 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e22e682c-f9d6-4ef0-a8ad-b87aea2ef852" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619274 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc0a76d-907f-4dd2-99be-8dcde78b34b6" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619284 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba61da6-5905-40c4-bdf7-dcd9b5e622f1" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619295 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9745dc7-7db8-47fd-9e70-4e88a4652c52" containerName="mariadb-database-create" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619305 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1902f70-49a1-454a-8f7c-e90e2aa9c8ea" containerName="mariadb-account-create-update" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.619313 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b871cf9-26fb-481d-8404-9c767e53937c" containerName="dnsmasq-dns" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.620145 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.622430 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.654246 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-qvb92"] Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.733208 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.733291 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.733521 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.733606 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-config\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.733685 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfwgr\" (UniqueName: \"kubernetes.io/projected/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-kube-api-access-nfwgr\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.733749 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.835018 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.835082 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.835145 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.835166 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-config\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.835201 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfwgr\" (UniqueName: \"kubernetes.io/projected/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-kube-api-access-nfwgr\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.835232 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.836302 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.836331 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 
20:48:21.836450 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-config\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.836730 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.836925 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.855061 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfwgr\" (UniqueName: \"kubernetes.io/projected/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-kube-api-access-nfwgr\") pod \"dnsmasq-dns-74f6bcbc87-qvb92\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:21 crc kubenswrapper[5029]: I0313 20:48:21.941254 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:22 crc kubenswrapper[5029]: I0313 20:48:22.292834 5029 generic.go:334] "Generic (PLEG): container finished" podID="73fbf5bd-1541-450a-be13-daf65ce110ac" containerID="cc61ac623e976451fdba84b055dbdfc2202ed91f2f114dd2ac339ec80e7dcc45" exitCode=0 Mar 13 20:48:22 crc kubenswrapper[5029]: I0313 20:48:22.292893 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44kzh" event={"ID":"73fbf5bd-1541-450a-be13-daf65ce110ac","Type":"ContainerDied","Data":"cc61ac623e976451fdba84b055dbdfc2202ed91f2f114dd2ac339ec80e7dcc45"} Mar 13 20:48:22 crc kubenswrapper[5029]: I0313 20:48:22.439725 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-qvb92"] Mar 13 20:48:22 crc kubenswrapper[5029]: W0313 20:48:22.441639 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ede6456_3d78_4ab5_8a0b_5c83a6e85a40.slice/crio-b0b8367cdd971b65240e6c0216c24bda15ba43b49d300c9bf0f347f1c5efaa5d WatchSource:0}: Error finding container b0b8367cdd971b65240e6c0216c24bda15ba43b49d300c9bf0f347f1c5efaa5d: Status 404 returned error can't find the container with id b0b8367cdd971b65240e6c0216c24bda15ba43b49d300c9bf0f347f1c5efaa5d Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.302838 5029 generic.go:334] "Generic (PLEG): container finished" podID="1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" containerID="a22528a3261d14bad35a46228a83def5fc213c5f1e8cfb9213be72093f66b90f" exitCode=0 Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.302981 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" event={"ID":"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40","Type":"ContainerDied","Data":"a22528a3261d14bad35a46228a83def5fc213c5f1e8cfb9213be72093f66b90f"} Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.303332 5029 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" event={"ID":"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40","Type":"ContainerStarted","Data":"b0b8367cdd971b65240e6c0216c24bda15ba43b49d300c9bf0f347f1c5efaa5d"} Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.597574 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.677719 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr7x4\" (UniqueName: \"kubernetes.io/projected/73fbf5bd-1541-450a-be13-daf65ce110ac-kube-api-access-lr7x4\") pod \"73fbf5bd-1541-450a-be13-daf65ce110ac\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.677817 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-config-data\") pod \"73fbf5bd-1541-450a-be13-daf65ce110ac\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.678025 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-combined-ca-bundle\") pod \"73fbf5bd-1541-450a-be13-daf65ce110ac\" (UID: \"73fbf5bd-1541-450a-be13-daf65ce110ac\") " Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.683089 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fbf5bd-1541-450a-be13-daf65ce110ac-kube-api-access-lr7x4" (OuterVolumeSpecName: "kube-api-access-lr7x4") pod "73fbf5bd-1541-450a-be13-daf65ce110ac" (UID: "73fbf5bd-1541-450a-be13-daf65ce110ac"). InnerVolumeSpecName "kube-api-access-lr7x4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.700562 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73fbf5bd-1541-450a-be13-daf65ce110ac" (UID: "73fbf5bd-1541-450a-be13-daf65ce110ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.719827 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-config-data" (OuterVolumeSpecName: "config-data") pod "73fbf5bd-1541-450a-be13-daf65ce110ac" (UID: "73fbf5bd-1541-450a-be13-daf65ce110ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.780653 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.780715 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr7x4\" (UniqueName: \"kubernetes.io/projected/73fbf5bd-1541-450a-be13-daf65ce110ac-kube-api-access-lr7x4\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:23 crc kubenswrapper[5029]: I0313 20:48:23.780733 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fbf5bd-1541-450a-be13-daf65ce110ac-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.314316 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-44kzh" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.314315 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44kzh" event={"ID":"73fbf5bd-1541-450a-be13-daf65ce110ac","Type":"ContainerDied","Data":"3965d053b896861703c450b2cc8adc402fbb06f7b2156161fbaf67e9c7f0f025"} Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.314430 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3965d053b896861703c450b2cc8adc402fbb06f7b2156161fbaf67e9c7f0f025" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.317836 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" event={"ID":"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40","Type":"ContainerStarted","Data":"e1c99da4e22f48053658d223393c6b4bcbeb57a706ab09a205e5e2097466946e"} Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.318201 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.347985 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" podStartSLOduration=3.347961686 podStartE2EDuration="3.347961686s" podCreationTimestamp="2026-03-13 20:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:48:24.343188297 +0000 UTC m=+1264.359270720" watchObservedRunningTime="2026-03-13 20:48:24.347961686 +0000 UTC m=+1264.364044089" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.577921 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-qvb92"] Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.621922 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2vbsv"] Mar 13 20:48:24 crc 
kubenswrapper[5029]: E0313 20:48:24.622400 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fbf5bd-1541-450a-be13-daf65ce110ac" containerName="keystone-db-sync" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.622421 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fbf5bd-1541-450a-be13-daf65ce110ac" containerName="keystone-db-sync" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.622594 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fbf5bd-1541-450a-be13-daf65ce110ac" containerName="keystone-db-sync" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.626644 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.636015 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r8cx9"] Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.639569 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.643913 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.644136 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.644285 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.644422 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qpzzs" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.644581 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.654615 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2vbsv"] Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.670458 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r8cx9"] Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805207 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6zp\" (UniqueName: \"kubernetes.io/projected/ece6b7c1-6647-4131-821c-889f5504b402-kube-api-access-dt6zp\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805266 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct5m8\" (UniqueName: \"kubernetes.io/projected/0f5a8379-377e-403c-a29b-cb80913e1ad9-kube-api-access-ct5m8\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " 
pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805293 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805349 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-combined-ca-bundle\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805367 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-config-data\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805388 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805408 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-fernet-keys\") pod \"keystone-bootstrap-r8cx9\" (UID: 
\"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805465 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-scripts\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805519 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805539 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-credential-keys\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805554 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-config\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.805568 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-svc\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: 
\"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.823400 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75cff898d9-qm9m6"] Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.824719 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.835402 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.835983 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.836082 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-l774r" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.836226 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.858485 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75cff898d9-qm9m6"] Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.906401 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xhhzb"] Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.907585 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct5m8\" (UniqueName: \"kubernetes.io/projected/0f5a8379-377e-403c-a29b-cb80913e1ad9-kube-api-access-ct5m8\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.907635 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.907683 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2fa20b-b10c-4818-b493-705c299a1982-logs\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.907722 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.907722 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-config-data\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908337 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-combined-ca-bundle\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908368 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-config-data\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908397 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908422 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-fernet-keys\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908528 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-scripts\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908592 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2fa20b-b10c-4818-b493-705c299a1982-horizon-secret-key\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908622 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908656 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-scripts\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908682 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-config\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908707 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-credential-keys\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908730 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-svc\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908760 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6zp\" (UniqueName: \"kubernetes.io/projected/ece6b7c1-6647-4131-821c-889f5504b402-kube-api-access-dt6zp\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908784 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rnqwh\" (UniqueName: \"kubernetes.io/projected/ab2fa20b-b10c-4818-b493-705c299a1982-kube-api-access-rnqwh\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.908954 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.911297 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.913830 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-combined-ca-bundle\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.914626 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-config\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.916289 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.916459 5029 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vvsl5" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.916579 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.917248 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.917573 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-credential-keys\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.918190 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-svc\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.921290 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-config-data\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.921315 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-fernet-keys\") 
pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.923130 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-scripts\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.932620 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xhhzb"] Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.984177 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct5m8\" (UniqueName: \"kubernetes.io/projected/0f5a8379-377e-403c-a29b-cb80913e1ad9-kube-api-access-ct5m8\") pod \"keystone-bootstrap-r8cx9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:24 crc kubenswrapper[5029]: I0313 20:48:24.991584 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6zp\" (UniqueName: \"kubernetes.io/projected/ece6b7c1-6647-4131-821c-889f5504b402-kube-api-access-dt6zp\") pod \"dnsmasq-dns-847c4cc679-2vbsv\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.023932 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a13c03-b012-4416-bb5b-3ff21417290a-etc-machine-id\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.023972 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-scripts\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024008 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2fa20b-b10c-4818-b493-705c299a1982-logs\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024027 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-combined-ca-bundle\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024053 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-config-data\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024137 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-db-sync-config-data\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024156 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9plv\" (UniqueName: 
\"kubernetes.io/projected/e5a13c03-b012-4416-bb5b-3ff21417290a-kube-api-access-x9plv\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024173 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2fa20b-b10c-4818-b493-705c299a1982-horizon-secret-key\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024188 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-config-data\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024207 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-scripts\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024231 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnqwh\" (UniqueName: \"kubernetes.io/projected/ab2fa20b-b10c-4818-b493-705c299a1982-kube-api-access-rnqwh\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.024929 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2fa20b-b10c-4818-b493-705c299a1982-logs\") pod \"horizon-75cff898d9-qm9m6\" 
(UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.025902 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-config-data\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.033402 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-scripts\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.040373 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2fa20b-b10c-4818-b493-705c299a1982-horizon-secret-key\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.084000 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2vbsv"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.084717 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.124931 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qdq6p"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.126336 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a13c03-b012-4416-bb5b-3ff21417290a-etc-machine-id\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.126364 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-scripts\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.126397 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-combined-ca-bundle\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.126501 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-db-sync-config-data\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.126519 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9plv\" (UniqueName: \"kubernetes.io/projected/e5a13c03-b012-4416-bb5b-3ff21417290a-kube-api-access-x9plv\") pod 
\"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.126535 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-config-data\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.128344 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.130607 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnqwh\" (UniqueName: \"kubernetes.io/projected/ab2fa20b-b10c-4818-b493-705c299a1982-kube-api-access-rnqwh\") pod \"horizon-75cff898d9-qm9m6\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.131027 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a13c03-b012-4416-bb5b-3ff21417290a-etc-machine-id\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.153523 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qdq6p"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.153681 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.153771 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.154000 5029 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qr96x" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.155619 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.175340 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-config-data\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.175379 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9plv\" (UniqueName: \"kubernetes.io/projected/e5a13c03-b012-4416-bb5b-3ff21417290a-kube-api-access-x9plv\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.179660 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-combined-ca-bundle\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.182184 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-scripts\") pod \"cinder-db-sync-xhhzb\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.191646 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-db-sync-config-data\") pod \"cinder-db-sync-xhhzb\" (UID: 
\"e5a13c03-b012-4416-bb5b-3ff21417290a\") " pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.200623 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.202692 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.216355 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.217027 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.221932 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.233315 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-combined-ca-bundle\") pod \"neutron-db-sync-qdq6p\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.233399 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-config\") pod \"neutron-db-sync-qdq6p\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.233472 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skjwt\" (UniqueName: \"kubernetes.io/projected/4cd74a89-871d-499c-9362-d2ee8713147a-kube-api-access-skjwt\") pod \"neutron-db-sync-qdq6p\" (UID: 
\"4cd74a89-871d-499c-9362-d2ee8713147a\") " pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.245957 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rkc9f"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.247479 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.268872 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rkc9f"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.284067 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.316789 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85ff45d975-bg6kz"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.318618 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.334116 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qcjtl"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335080 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335128 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-combined-ca-bundle\") pod \"neutron-db-sync-qdq6p\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335308 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335410 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-config\") pod \"neutron-db-sync-qdq6p\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335437 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-run-httpd\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335483 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335487 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335531 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpbb2\" (UniqueName: \"kubernetes.io/projected/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-kube-api-access-bpbb2\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335551 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335633 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-config\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335694 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-config-data\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335712 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-scripts\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335764 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv4ng\" (UniqueName: \"kubernetes.io/projected/eb18b58b-6d93-4ca4-b191-161234269f8b-kube-api-access-nv4ng\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.335903 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skjwt\" (UniqueName: \"kubernetes.io/projected/4cd74a89-871d-499c-9362-d2ee8713147a-kube-api-access-skjwt\") pod \"neutron-db-sync-qdq6p\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.336093 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.336178 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.336321 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-log-httpd\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.344514 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.345334 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.347278 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xx5k2" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.350460 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-config\") pod \"neutron-db-sync-qdq6p\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.351179 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-combined-ca-bundle\") pod \"neutron-db-sync-qdq6p\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.374566 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skjwt\" (UniqueName: \"kubernetes.io/projected/4cd74a89-871d-499c-9362-d2ee8713147a-kube-api-access-skjwt\") pod \"neutron-db-sync-qdq6p\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.380405 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-76l7z"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.383174 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.389189 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.389406 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-hr46l" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.413948 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h5hp5"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.415541 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.418814 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-c6vdb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.419157 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.424682 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85ff45d975-bg6kz"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437721 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-run-httpd\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437764 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437789 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437806 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpbb2\" (UniqueName: \"kubernetes.io/projected/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-kube-api-access-bpbb2\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " 
pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437826 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-config\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437862 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-config-data\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437881 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-scripts\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437901 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv4ng\" (UniqueName: \"kubernetes.io/projected/eb18b58b-6d93-4ca4-b191-161234269f8b-kube-api-access-nv4ng\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437934 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5vdw\" (UniqueName: \"kubernetes.io/projected/c75c1c18-27e6-4fae-bf58-03387b32e4f3-kube-api-access-m5vdw\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437953 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-scripts\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.437993 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75c1c18-27e6-4fae-bf58-03387b32e4f3-logs\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438018 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-config-data\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438039 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438067 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438093 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd0573dc-2006-4faa-9286-c7743e50e702-horizon-secret-key\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438120 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-scripts\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438152 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-log-httpd\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438181 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-combined-ca-bundle\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438205 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7fdc\" (UniqueName: \"kubernetes.io/projected/dd0573dc-2006-4faa-9286-c7743e50e702-kube-api-access-d7fdc\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438231 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-config-data\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438311 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438354 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0573dc-2006-4faa-9286-c7743e50e702-logs\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.438378 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.440618 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.441127 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" 
(UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.442199 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-log-httpd\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.441128 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-run-httpd\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.442408 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-76l7z"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.443985 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.445469 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.446869 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-config\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: 
\"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.448382 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.455414 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-scripts\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.455524 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.460379 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-config-data\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.469653 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpbb2\" (UniqueName: \"kubernetes.io/projected/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-kube-api-access-bpbb2\") pod \"ceilometer-0\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.477156 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h5hp5"] Mar 13 
20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.487502 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv4ng\" (UniqueName: \"kubernetes.io/projected/eb18b58b-6d93-4ca4-b191-161234269f8b-kube-api-access-nv4ng\") pod \"dnsmasq-dns-785d8bcb8c-rkc9f\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.488420 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qcjtl"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.493624 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.500734 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.506279 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.508061 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.513605 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4g67r" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.513908 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.514133 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.515936 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.516082 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540088 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-combined-ca-bundle\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540133 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-combined-ca-bundle\") pod \"barbican-db-sync-h5hp5\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540164 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vdw\" (UniqueName: \"kubernetes.io/projected/c75c1c18-27e6-4fae-bf58-03387b32e4f3-kube-api-access-m5vdw\") pod 
\"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540183 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-scripts\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540206 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjb78\" (UniqueName: \"kubernetes.io/projected/e27175d1-38d4-4709-9d98-b71adc445f02-kube-api-access-xjb78\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540232 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75c1c18-27e6-4fae-bf58-03387b32e4f3-logs\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540247 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd5cj\" (UniqueName: \"kubernetes.io/projected/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-kube-api-access-gd5cj\") pod \"barbican-db-sync-h5hp5\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540270 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-config-data\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " 
pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540298 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd0573dc-2006-4faa-9286-c7743e50e702-horizon-secret-key\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540322 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-scripts\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540356 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-combined-ca-bundle\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540373 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-db-sync-config-data\") pod \"barbican-db-sync-h5hp5\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540391 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7fdc\" (UniqueName: \"kubernetes.io/projected/dd0573dc-2006-4faa-9286-c7743e50e702-kube-api-access-d7fdc\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc 
kubenswrapper[5029]: I0313 20:48:25.540412 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-config-data\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540433 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-config-data\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540462 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0573dc-2006-4faa-9286-c7743e50e702-logs\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.540479 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-job-config-data\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.547129 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-scripts\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.549872 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/dd0573dc-2006-4faa-9286-c7743e50e702-logs\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.552315 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd0573dc-2006-4faa-9286-c7743e50e702-horizon-secret-key\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.554816 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75c1c18-27e6-4fae-bf58-03387b32e4f3-logs\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.556592 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-config-data\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.562077 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-config-data\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.565330 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-scripts\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " 
pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.569065 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-combined-ca-bundle\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.572070 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.576748 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vdw\" (UniqueName: \"kubernetes.io/projected/c75c1c18-27e6-4fae-bf58-03387b32e4f3-kube-api-access-m5vdw\") pod \"placement-db-sync-qcjtl\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.578556 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7fdc\" (UniqueName: \"kubernetes.io/projected/dd0573dc-2006-4faa-9286-c7743e50e702-kube-api-access-d7fdc\") pod \"horizon-85ff45d975-bg6kz\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.606796 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.628121 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.649006 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjb78\" (UniqueName: \"kubernetes.io/projected/e27175d1-38d4-4709-9d98-b71adc445f02-kube-api-access-xjb78\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.649279 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd5cj\" (UniqueName: \"kubernetes.io/projected/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-kube-api-access-gd5cj\") pod \"barbican-db-sync-h5hp5\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.649387 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vjd\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-kube-api-access-75vjd\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.649513 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.649607 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.649702 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.649787 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-config-data\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.649919 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-logs\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.650024 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-db-sync-config-data\") pod \"barbican-db-sync-h5hp5\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.650121 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-config-data\") pod \"manila-db-sync-76l7z\" (UID: 
\"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.650201 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.650298 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-ceph\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.650387 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-job-config-data\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.650494 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-scripts\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.650575 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-combined-ca-bundle\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" 
Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.650643 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-combined-ca-bundle\") pod \"barbican-db-sync-h5hp5\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.654908 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-db-sync-config-data\") pod \"barbican-db-sync-h5hp5\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.655386 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-combined-ca-bundle\") pod \"barbican-db-sync-h5hp5\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.656951 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.657880 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-job-config-data\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.658586 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-config-data\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.675157 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qcjtl" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.679226 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-combined-ca-bundle\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.683457 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjb78\" (UniqueName: \"kubernetes.io/projected/e27175d1-38d4-4709-9d98-b71adc445f02-kube-api-access-xjb78\") pod \"manila-db-sync-76l7z\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") " pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.691607 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd5cj\" (UniqueName: \"kubernetes.io/projected/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-kube-api-access-gd5cj\") pod 
\"barbican-db-sync-h5hp5\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.752889 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vjd\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-kube-api-access-75vjd\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.753205 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.753228 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.753249 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.753273 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.753299 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-logs\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.753324 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.753348 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-ceph\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.753403 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-scripts\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.755168 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-logs\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.756166 
5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.757817 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.765870 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-config-data\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.766814 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.767071 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.767244 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-scripts\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.783717 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-ceph\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.784681 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vjd\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-kube-api-access-75vjd\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.794744 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-76l7z" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.838678 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.914921 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:25 crc kubenswrapper[5029]: I0313 20:48:25.935531 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2vbsv"] Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.131640 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.133697 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.142332 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.142529 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.144230 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.169360 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.194791 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75cff898d9-qm9m6"] Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.289338 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.289411 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.289446 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.289475 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-logs\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.289495 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.289539 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.289562 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.289586 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.289646 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz5ng\" (UniqueName: \"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-kube-api-access-gz5ng\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.378587 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-75cff898d9-qm9m6" event={"ID":"ab2fa20b-b10c-4818-b493-705c299a1982","Type":"ContainerStarted","Data":"216a5b4d58bef2db71fca4b28e377fab251231b15ce563431b7f64ebbc018210"} Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.379761 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" podUID="1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" containerName="dnsmasq-dns" containerID="cri-o://e1c99da4e22f48053658d223393c6b4bcbeb57a706ab09a205e5e2097466946e" gracePeriod=10 Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.380064 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" event={"ID":"ece6b7c1-6647-4131-821c-889f5504b402","Type":"ContainerStarted","Data":"2734d6eedffd573328c3ccc03dae27de8efca46fc947fca09d16b431ac82619c"} Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.390736 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz5ng\" (UniqueName: \"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-kube-api-access-gz5ng\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.390823 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.390916 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " 
pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.390946 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.390973 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-logs\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.390996 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.391053 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.391084 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 
20:48:26.391107 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.393294 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-logs\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.393714 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.394343 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.402450 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.408640 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.409080 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.409882 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.413331 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.416601 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz5ng\" (UniqueName: \"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-kube-api-access-gz5ng\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.460075 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.715406 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.744800 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qdq6p"] Mar 13 20:48:26 crc kubenswrapper[5029]: W0313 20:48:26.768883 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5a13c03_b012_4416_bb5b_3ff21417290a.slice/crio-17c8280991d15e962bd2f31bc6c3678400794de8ec3a7bd2bd7786be512920e7 WatchSource:0}: Error finding container 17c8280991d15e962bd2f31bc6c3678400794de8ec3a7bd2bd7786be512920e7: Status 404 returned error can't find the container with id 17c8280991d15e962bd2f31bc6c3678400794de8ec3a7bd2bd7786be512920e7 Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.769891 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xhhzb"] Mar 13 20:48:26 crc kubenswrapper[5029]: W0313 20:48:26.799151 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb18b58b_6d93_4ca4_b191_161234269f8b.slice/crio-6f2d27bed78aa83d46924117d8d88af3e6e68d245089ec303b96f5e2eea7119e WatchSource:0}: Error finding container 6f2d27bed78aa83d46924117d8d88af3e6e68d245089ec303b96f5e2eea7119e: Status 404 returned error can't find the container with id 6f2d27bed78aa83d46924117d8d88af3e6e68d245089ec303b96f5e2eea7119e Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.801006 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rkc9f"] Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.817298 5029 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r8cx9"] Mar 13 20:48:26 crc kubenswrapper[5029]: I0313 20:48:26.857706 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:48:26 crc kubenswrapper[5029]: E0313 20:48:26.884428 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece6b7c1_6647_4131_821c_889f5504b402.slice/crio-cfdc9830108d545f11c808301fc4b6bb40cf341008d0898c38ecbe569f0e98d5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece6b7c1_6647_4131_821c_889f5504b402.slice/crio-conmon-cfdc9830108d545f11c808301fc4b6bb40cf341008d0898c38ecbe569f0e98d5.scope\": RecentStats: unable to find data in memory cache]" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.166770 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85ff45d975-bg6kz"] Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.194566 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:48:27 crc kubenswrapper[5029]: W0313 20:48:27.211375 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd0573dc_2006_4faa_9286_c7743e50e702.slice/crio-58ff262db35a5aaa99b307c05af52eef41a99f93cd312f7694c2d0478d4dfd42 WatchSource:0}: Error finding container 58ff262db35a5aaa99b307c05af52eef41a99f93cd312f7694c2d0478d4dfd42: Status 404 returned error can't find the container with id 58ff262db35a5aaa99b307c05af52eef41a99f93cd312f7694c2d0478d4dfd42 Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.251737 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h5hp5"] Mar 13 20:48:27 crc kubenswrapper[5029]: W0313 20:48:27.267284 5029 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5243e50_28ff_4f5c_aeb1_97a87b1f2765.slice/crio-ba74b0044d5e1eb920e205fed6eaa760a4a5d87f7967ca56b5035ea19baa1c5a WatchSource:0}: Error finding container ba74b0044d5e1eb920e205fed6eaa760a4a5d87f7967ca56b5035ea19baa1c5a: Status 404 returned error can't find the container with id ba74b0044d5e1eb920e205fed6eaa760a4a5d87f7967ca56b5035ea19baa1c5a Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.297400 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75cff898d9-qm9m6"] Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.337910 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65ffd59b99-lqljn"] Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.339794 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.359560 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.457736 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qcjtl"] Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.476647 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xddd9\" (UniqueName: \"kubernetes.io/projected/d133172a-0047-408d-8f55-270a9f6462ca-kube-api-access-xddd9\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.476754 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d133172a-0047-408d-8f55-270a9f6462ca-horizon-secret-key\") pod 
\"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.476989 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d133172a-0047-408d-8f55-270a9f6462ca-logs\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.477468 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-scripts\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.477641 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-config-data\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.481716 5029 generic.go:334] "Generic (PLEG): container finished" podID="ece6b7c1-6647-4131-821c-889f5504b402" containerID="cfdc9830108d545f11c808301fc4b6bb40cf341008d0898c38ecbe569f0e98d5" exitCode=0 Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.481826 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" event={"ID":"ece6b7c1-6647-4131-821c-889f5504b402","Type":"ContainerDied","Data":"cfdc9830108d545f11c808301fc4b6bb40cf341008d0898c38ecbe569f0e98d5"} Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.511603 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-65ffd59b99-lqljn"] Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.523376 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r8cx9" event={"ID":"0f5a8379-377e-403c-a29b-cb80913e1ad9","Type":"ContainerStarted","Data":"dc135bb077fcebcafa2832cd30c4a08d51d244f5d67717a81faed85a4cd86d2c"} Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.524399 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" event={"ID":"eb18b58b-6d93-4ca4-b191-161234269f8b","Type":"ContainerStarted","Data":"6f2d27bed78aa83d46924117d8d88af3e6e68d245089ec303b96f5e2eea7119e"} Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.541601 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"160773c1-ebe6-4b3b-b26d-5745cbf9ef70","Type":"ContainerStarted","Data":"026e69edd8af455c0839b69cce479b743095fd7f732c9854a6623864a6528622"} Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.559454 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85ff45d975-bg6kz" event={"ID":"dd0573dc-2006-4faa-9286-c7743e50e702","Type":"ContainerStarted","Data":"58ff262db35a5aaa99b307c05af52eef41a99f93cd312f7694c2d0478d4dfd42"} Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.562029 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xhhzb" event={"ID":"e5a13c03-b012-4416-bb5b-3ff21417290a","Type":"ContainerStarted","Data":"17c8280991d15e962bd2f31bc6c3678400794de8ec3a7bd2bd7786be512920e7"} Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.565891 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-76l7z"] Mar 13 20:48:27 crc kubenswrapper[5029]: W0313 20:48:27.569797 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode27175d1_38d4_4709_9d98_b71adc445f02.slice/crio-3bf6a61572d4223f551e3c5cdb691e15a339647324a035da8b7bdc4a29420c3e WatchSource:0}: Error finding container 3bf6a61572d4223f551e3c5cdb691e15a339647324a035da8b7bdc4a29420c3e: Status 404 returned error can't find the container with id 3bf6a61572d4223f551e3c5cdb691e15a339647324a035da8b7bdc4a29420c3e Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.571800 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qdq6p" event={"ID":"4cd74a89-871d-499c-9362-d2ee8713147a","Type":"ContainerStarted","Data":"04ee3fea4ffc1549f184daf107e1fff3687ed4e0851c111aff0ced667c1cfbef"} Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.579751 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-scripts\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.579803 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-config-data\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.579874 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xddd9\" (UniqueName: \"kubernetes.io/projected/d133172a-0047-408d-8f55-270a9f6462ca-kube-api-access-xddd9\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.579900 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d133172a-0047-408d-8f55-270a9f6462ca-horizon-secret-key\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.579936 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d133172a-0047-408d-8f55-270a9f6462ca-logs\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.580416 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d133172a-0047-408d-8f55-270a9f6462ca-logs\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.582064 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-scripts\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.585378 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-config-data\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.592571 5029 generic.go:334] "Generic (PLEG): container finished" podID="1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" containerID="e1c99da4e22f48053658d223393c6b4bcbeb57a706ab09a205e5e2097466946e" exitCode=0 Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 
20:48:27.592665 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" event={"ID":"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40","Type":"ContainerDied","Data":"e1c99da4e22f48053658d223393c6b4bcbeb57a706ab09a205e5e2097466946e"} Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.597094 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.597358 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.598429 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d133172a-0047-408d-8f55-270a9f6462ca-horizon-secret-key\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.605199 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5hp5" event={"ID":"a5243e50-28ff-4f5c-aeb1-97a87b1f2765","Type":"ContainerStarted","Data":"ba74b0044d5e1eb920e205fed6eaa760a4a5d87f7967ca56b5035ea19baa1c5a"} Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.609512 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.616490 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xddd9\" (UniqueName: \"kubernetes.io/projected/d133172a-0047-408d-8f55-270a9f6462ca-kube-api-access-xddd9\") pod \"horizon-65ffd59b99-lqljn\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.684361 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-config\") pod \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.684523 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-nb\") pod \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.684662 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-swift-storage-0\") pod \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.684742 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-svc\") pod \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.684821 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-sb\") pod \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.684925 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfwgr\" (UniqueName: \"kubernetes.io/projected/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-kube-api-access-nfwgr\") pod \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\" (UID: \"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40\") " Mar 13 20:48:27 crc 
kubenswrapper[5029]: I0313 20:48:27.704134 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-kube-api-access-nfwgr" (OuterVolumeSpecName: "kube-api-access-nfwgr") pod "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" (UID: "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40"). InnerVolumeSpecName "kube-api-access-nfwgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.735178 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.761244 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" (UID: "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.778513 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" (UID: "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.789012 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.795383 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.795611 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.795779 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfwgr\" (UniqueName: \"kubernetes.io/projected/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-kube-api-access-nfwgr\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.826012 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-config" (OuterVolumeSpecName: "config") pod "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" (UID: "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.871955 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" (UID: "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.902625 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.902673 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:27 crc kubenswrapper[5029]: I0313 20:48:27.918699 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" (UID: "1ede6456-3d78-4ab5-8a0b-5c83a6e85a40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.011081 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.167112 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.319907 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt6zp\" (UniqueName: \"kubernetes.io/projected/ece6b7c1-6647-4131-821c-889f5504b402-kube-api-access-dt6zp\") pod \"ece6b7c1-6647-4131-821c-889f5504b402\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.319985 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-swift-storage-0\") pod \"ece6b7c1-6647-4131-821c-889f5504b402\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.320034 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-config\") pod \"ece6b7c1-6647-4131-821c-889f5504b402\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.320171 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-sb\") pod \"ece6b7c1-6647-4131-821c-889f5504b402\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.320226 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-svc\") pod \"ece6b7c1-6647-4131-821c-889f5504b402\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.320278 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-nb\") pod \"ece6b7c1-6647-4131-821c-889f5504b402\" (UID: \"ece6b7c1-6647-4131-821c-889f5504b402\") " Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.326652 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece6b7c1-6647-4131-821c-889f5504b402-kube-api-access-dt6zp" (OuterVolumeSpecName: "kube-api-access-dt6zp") pod "ece6b7c1-6647-4131-821c-889f5504b402" (UID: "ece6b7c1-6647-4131-821c-889f5504b402"). InnerVolumeSpecName "kube-api-access-dt6zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.346979 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-config" (OuterVolumeSpecName: "config") pod "ece6b7c1-6647-4131-821c-889f5504b402" (UID: "ece6b7c1-6647-4131-821c-889f5504b402"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.352097 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ece6b7c1-6647-4131-821c-889f5504b402" (UID: "ece6b7c1-6647-4131-821c-889f5504b402"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.354015 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ece6b7c1-6647-4131-821c-889f5504b402" (UID: "ece6b7c1-6647-4131-821c-889f5504b402"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.359416 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ece6b7c1-6647-4131-821c-889f5504b402" (UID: "ece6b7c1-6647-4131-821c-889f5504b402"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.395151 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ece6b7c1-6647-4131-821c-889f5504b402" (UID: "ece6b7c1-6647-4131-821c-889f5504b402"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.407658 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65ffd59b99-lqljn"] Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.422958 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.422998 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.423007 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.423018 5029 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dt6zp\" (UniqueName: \"kubernetes.io/projected/ece6b7c1-6647-4131-821c-889f5504b402-kube-api-access-dt6zp\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.423027 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.423036 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece6b7c1-6647-4131-821c-889f5504b402-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.655696 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.672763 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qdq6p" podStartSLOduration=3.67273473 podStartE2EDuration="3.67273473s" podCreationTimestamp="2026-03-13 20:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:48:28.661177785 +0000 UTC m=+1268.677260188" watchObservedRunningTime="2026-03-13 20:48:28.67273473 +0000 UTC m=+1268.688817133" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.672966 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.679036 5029 generic.go:334] "Generic (PLEG): container finished" podID="eb18b58b-6d93-4ca4-b191-161234269f8b" containerID="ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff" exitCode=0 Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.755947 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r8cx9" podStartSLOduration=4.755926341 podStartE2EDuration="4.755926341s" podCreationTimestamp="2026-03-13 20:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:48:28.743805011 +0000 UTC m=+1268.759887414" watchObservedRunningTime="2026-03-13 20:48:28.755926341 +0000 UTC m=+1268.772008734" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.765271 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293","Type":"ContainerStarted","Data":"5c94e0cb2dc7e450d27d63ed5cd65ec7e00e0d53042419f19cfce092a63a9914"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.766198 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-76l7z" event={"ID":"e27175d1-38d4-4709-9d98-b71adc445f02","Type":"ContainerStarted","Data":"3bf6a61572d4223f551e3c5cdb691e15a339647324a035da8b7bdc4a29420c3e"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.766559 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qcjtl" event={"ID":"c75c1c18-27e6-4fae-bf58-03387b32e4f3","Type":"ContainerStarted","Data":"3f86ef07a7494f1abd21fe92e65d8b85df33dbdb056fb1b00b119996635e4f64"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.766629 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qdq6p" 
event={"ID":"4cd74a89-871d-499c-9362-d2ee8713147a","Type":"ContainerStarted","Data":"228079e89dc1372cfa4435fbee00985016bdda5232b901070c0d1d2349f8af7e"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.766697 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c384cd-721b-41fb-96c1-b493d2cf3497","Type":"ContainerStarted","Data":"ab5366a590248bfa859a9f2450b12303f7ad35f92403dc9f9f768ed104d133a7"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.766825 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-qvb92" event={"ID":"1ede6456-3d78-4ab5-8a0b-5c83a6e85a40","Type":"ContainerDied","Data":"b0b8367cdd971b65240e6c0216c24bda15ba43b49d300c9bf0f347f1c5efaa5d"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.767024 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2vbsv" event={"ID":"ece6b7c1-6647-4131-821c-889f5504b402","Type":"ContainerDied","Data":"2734d6eedffd573328c3ccc03dae27de8efca46fc947fca09d16b431ac82619c"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.767192 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" event={"ID":"eb18b58b-6d93-4ca4-b191-161234269f8b","Type":"ContainerDied","Data":"ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.767423 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r8cx9" event={"ID":"0f5a8379-377e-403c-a29b-cb80913e1ad9","Type":"ContainerStarted","Data":"e00e66fdc5dccc1f3ccad323476bb4612941de8aa1e1944aa48da880b61c8d4a"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.767585 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65ffd59b99-lqljn" 
event={"ID":"d133172a-0047-408d-8f55-270a9f6462ca","Type":"ContainerStarted","Data":"49164f5ff0017574529c1733e332025f3f2745e66dd8cdc19834fe386cb5e270"} Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.767172 5029 scope.go:117] "RemoveContainer" containerID="e1c99da4e22f48053658d223393c6b4bcbeb57a706ab09a205e5e2097466946e" Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.862531 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2vbsv"] Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.868416 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2vbsv"] Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.885876 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-qvb92"] Mar 13 20:48:28 crc kubenswrapper[5029]: I0313 20:48:28.885936 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-qvb92"] Mar 13 20:48:29 crc kubenswrapper[5029]: I0313 20:48:29.155243 5029 scope.go:117] "RemoveContainer" containerID="a22528a3261d14bad35a46228a83def5fc213c5f1e8cfb9213be72093f66b90f" Mar 13 20:48:29 crc kubenswrapper[5029]: I0313 20:48:29.256391 5029 scope.go:117] "RemoveContainer" containerID="cfdc9830108d545f11c808301fc4b6bb40cf341008d0898c38ecbe569f0e98d5" Mar 13 20:48:29 crc kubenswrapper[5029]: I0313 20:48:29.739737 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" event={"ID":"eb18b58b-6d93-4ca4-b191-161234269f8b","Type":"ContainerStarted","Data":"ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295"} Mar 13 20:48:29 crc kubenswrapper[5029]: I0313 20:48:29.740166 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:29 crc kubenswrapper[5029]: I0313 20:48:29.749886 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"00c384cd-721b-41fb-96c1-b493d2cf3497","Type":"ContainerStarted","Data":"c293b5137634f5cdb4e4f9d0179e2ff7c07fe904fde8ddf6446d1329b0dcef70"} Mar 13 20:48:29 crc kubenswrapper[5029]: I0313 20:48:29.790346 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" podStartSLOduration=4.790321897 podStartE2EDuration="4.790321897s" podCreationTimestamp="2026-03-13 20:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:48:29.764336248 +0000 UTC m=+1269.780418641" watchObservedRunningTime="2026-03-13 20:48:29.790321897 +0000 UTC m=+1269.806404300" Mar 13 20:48:30 crc kubenswrapper[5029]: I0313 20:48:30.638032 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" path="/var/lib/kubelet/pods/1ede6456-3d78-4ab5-8a0b-5c83a6e85a40/volumes" Mar 13 20:48:30 crc kubenswrapper[5029]: I0313 20:48:30.641164 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece6b7c1-6647-4131-821c-889f5504b402" path="/var/lib/kubelet/pods/ece6b7c1-6647-4131-821c-889f5504b402/volumes" Mar 13 20:48:30 crc kubenswrapper[5029]: I0313 20:48:30.881277 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c384cd-721b-41fb-96c1-b493d2cf3497","Type":"ContainerStarted","Data":"fadf38e4f74c9089641a055d7a9ae9d9e6b3387b803edb331bb4dc29e7c2754d"} Mar 13 20:48:30 crc kubenswrapper[5029]: I0313 20:48:30.881706 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerName="glance-log" containerID="cri-o://c293b5137634f5cdb4e4f9d0179e2ff7c07fe904fde8ddf6446d1329b0dcef70" gracePeriod=30 Mar 13 20:48:30 crc kubenswrapper[5029]: I0313 
20:48:30.882162 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerName="glance-httpd" containerID="cri-o://fadf38e4f74c9089641a055d7a9ae9d9e6b3387b803edb331bb4dc29e7c2754d" gracePeriod=30 Mar 13 20:48:30 crc kubenswrapper[5029]: I0313 20:48:30.921958 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293","Type":"ContainerStarted","Data":"51f4cc4e59f8ec239dfd6e4424c8fd8b04c9690055da461e234524106688ae86"} Mar 13 20:48:30 crc kubenswrapper[5029]: I0313 20:48:30.939988 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.939959679 podStartE2EDuration="5.939959679s" podCreationTimestamp="2026-03-13 20:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:48:30.937182513 +0000 UTC m=+1270.953264936" watchObservedRunningTime="2026-03-13 20:48:30.939959679 +0000 UTC m=+1270.956042082" Mar 13 20:48:31 crc kubenswrapper[5029]: I0313 20:48:31.933329 5029 generic.go:334] "Generic (PLEG): container finished" podID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerID="fadf38e4f74c9089641a055d7a9ae9d9e6b3387b803edb331bb4dc29e7c2754d" exitCode=143 Mar 13 20:48:31 crc kubenswrapper[5029]: I0313 20:48:31.934066 5029 generic.go:334] "Generic (PLEG): container finished" podID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerID="c293b5137634f5cdb4e4f9d0179e2ff7c07fe904fde8ddf6446d1329b0dcef70" exitCode=143 Mar 13 20:48:31 crc kubenswrapper[5029]: I0313 20:48:31.933416 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"00c384cd-721b-41fb-96c1-b493d2cf3497","Type":"ContainerDied","Data":"fadf38e4f74c9089641a055d7a9ae9d9e6b3387b803edb331bb4dc29e7c2754d"} Mar 13 20:48:31 crc kubenswrapper[5029]: I0313 20:48:31.934204 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c384cd-721b-41fb-96c1-b493d2cf3497","Type":"ContainerDied","Data":"c293b5137634f5cdb4e4f9d0179e2ff7c07fe904fde8ddf6446d1329b0dcef70"} Mar 13 20:48:31 crc kubenswrapper[5029]: I0313 20:48:31.939940 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293","Type":"ContainerStarted","Data":"d6ac3e20d020f702fe8287ca43359638de8446604b7a4e2e454f01d628a83837"} Mar 13 20:48:31 crc kubenswrapper[5029]: I0313 20:48:31.940301 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerName="glance-log" containerID="cri-o://51f4cc4e59f8ec239dfd6e4424c8fd8b04c9690055da461e234524106688ae86" gracePeriod=30 Mar 13 20:48:31 crc kubenswrapper[5029]: I0313 20:48:31.940418 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerName="glance-httpd" containerID="cri-o://d6ac3e20d020f702fe8287ca43359638de8446604b7a4e2e454f01d628a83837" gracePeriod=30 Mar 13 20:48:31 crc kubenswrapper[5029]: I0313 20:48:31.962187 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.962169382 podStartE2EDuration="6.962169382s" podCreationTimestamp="2026-03-13 20:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:48:31.961047972 +0000 UTC m=+1271.977130395" 
watchObservedRunningTime="2026-03-13 20:48:31.962169382 +0000 UTC m=+1271.978251785" Mar 13 20:48:32 crc kubenswrapper[5029]: I0313 20:48:32.949717 5029 generic.go:334] "Generic (PLEG): container finished" podID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerID="d6ac3e20d020f702fe8287ca43359638de8446604b7a4e2e454f01d628a83837" exitCode=0 Mar 13 20:48:32 crc kubenswrapper[5029]: I0313 20:48:32.950037 5029 generic.go:334] "Generic (PLEG): container finished" podID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerID="51f4cc4e59f8ec239dfd6e4424c8fd8b04c9690055da461e234524106688ae86" exitCode=143 Mar 13 20:48:32 crc kubenswrapper[5029]: I0313 20:48:32.949759 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293","Type":"ContainerDied","Data":"d6ac3e20d020f702fe8287ca43359638de8446604b7a4e2e454f01d628a83837"} Mar 13 20:48:32 crc kubenswrapper[5029]: I0313 20:48:32.950112 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293","Type":"ContainerDied","Data":"51f4cc4e59f8ec239dfd6e4424c8fd8b04c9690055da461e234524106688ae86"} Mar 13 20:48:32 crc kubenswrapper[5029]: I0313 20:48:32.951535 5029 generic.go:334] "Generic (PLEG): container finished" podID="0f5a8379-377e-403c-a29b-cb80913e1ad9" containerID="e00e66fdc5dccc1f3ccad323476bb4612941de8aa1e1944aa48da880b61c8d4a" exitCode=0 Mar 13 20:48:32 crc kubenswrapper[5029]: I0313 20:48:32.951556 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r8cx9" event={"ID":"0f5a8379-377e-403c-a29b-cb80913e1ad9","Type":"ContainerDied","Data":"e00e66fdc5dccc1f3ccad323476bb4612941de8aa1e1944aa48da880b61c8d4a"} Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.179334 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85ff45d975-bg6kz"] Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 
20:48:34.226916 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f6c6bfdcb-59kpl"] Mar 13 20:48:34 crc kubenswrapper[5029]: E0313 20:48:34.227421 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece6b7c1-6647-4131-821c-889f5504b402" containerName="init" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.227446 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece6b7c1-6647-4131-821c-889f5504b402" containerName="init" Mar 13 20:48:34 crc kubenswrapper[5029]: E0313 20:48:34.227480 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" containerName="dnsmasq-dns" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.227491 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" containerName="dnsmasq-dns" Mar 13 20:48:34 crc kubenswrapper[5029]: E0313 20:48:34.227507 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" containerName="init" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.227514 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" containerName="init" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.227771 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ede6456-3d78-4ab5-8a0b-5c83a6e85a40" containerName="dnsmasq-dns" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.227796 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece6b7c1-6647-4131-821c-889f5504b402" containerName="init" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.228900 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.237658 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.261097 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6c6bfdcb-59kpl"] Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.295829 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-combined-ca-bundle\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.295907 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-tls-certs\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.295974 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-secret-key\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.296018 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-logs\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc 
kubenswrapper[5029]: I0313 20:48:34.296056 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7s2l\" (UniqueName: \"kubernetes.io/projected/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-kube-api-access-l7s2l\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.296092 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-scripts\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.296159 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-config-data\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.320996 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65ffd59b99-lqljn"] Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.379095 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-674bcdb76-8wx84"] Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.380500 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.399500 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-secret-key\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.399557 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-logs\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.399592 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7s2l\" (UniqueName: \"kubernetes.io/projected/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-kube-api-access-l7s2l\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.399625 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-scripts\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.399676 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-config-data\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 
20:48:34.399710 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-combined-ca-bundle\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.399731 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-tls-certs\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.406386 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-tls-certs\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.409565 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-674bcdb76-8wx84"] Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.409913 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-secret-key\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.410410 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-scripts\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 
13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.410644 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-logs\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.411108 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-config-data\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.415195 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-combined-ca-bundle\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.433235 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7s2l\" (UniqueName: \"kubernetes.io/projected/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-kube-api-access-l7s2l\") pod \"horizon-6f6c6bfdcb-59kpl\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.501776 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh4kl\" (UniqueName: \"kubernetes.io/projected/e88c424e-0503-40ac-9f24-5daa55912ff3-kube-api-access-fh4kl\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.501872 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e88c424e-0503-40ac-9f24-5daa55912ff3-config-data\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.501986 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e88c424e-0503-40ac-9f24-5daa55912ff3-logs\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.502021 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e88c424e-0503-40ac-9f24-5daa55912ff3-horizon-secret-key\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.502072 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88c424e-0503-40ac-9f24-5daa55912ff3-horizon-tls-certs\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.502097 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88c424e-0503-40ac-9f24-5daa55912ff3-combined-ca-bundle\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.502124 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88c424e-0503-40ac-9f24-5daa55912ff3-scripts\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.564152 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.603723 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e88c424e-0503-40ac-9f24-5daa55912ff3-logs\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.603790 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e88c424e-0503-40ac-9f24-5daa55912ff3-horizon-secret-key\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.603888 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88c424e-0503-40ac-9f24-5daa55912ff3-horizon-tls-certs\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.603916 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88c424e-0503-40ac-9f24-5daa55912ff3-combined-ca-bundle\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc 
kubenswrapper[5029]: I0313 20:48:34.603942 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88c424e-0503-40ac-9f24-5daa55912ff3-scripts\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.604003 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh4kl\" (UniqueName: \"kubernetes.io/projected/e88c424e-0503-40ac-9f24-5daa55912ff3-kube-api-access-fh4kl\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.604041 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e88c424e-0503-40ac-9f24-5daa55912ff3-config-data\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.605566 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e88c424e-0503-40ac-9f24-5daa55912ff3-config-data\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.605921 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e88c424e-0503-40ac-9f24-5daa55912ff3-logs\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.610783 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e88c424e-0503-40ac-9f24-5daa55912ff3-scripts\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.610881 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e88c424e-0503-40ac-9f24-5daa55912ff3-horizon-secret-key\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.614738 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88c424e-0503-40ac-9f24-5daa55912ff3-combined-ca-bundle\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.618631 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88c424e-0503-40ac-9f24-5daa55912ff3-horizon-tls-certs\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.626361 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh4kl\" (UniqueName: \"kubernetes.io/projected/e88c424e-0503-40ac-9f24-5daa55912ff3-kube-api-access-fh4kl\") pod \"horizon-674bcdb76-8wx84\" (UID: \"e88c424e-0503-40ac-9f24-5daa55912ff3\") " pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:34 crc kubenswrapper[5029]: I0313 20:48:34.823777 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:48:35 crc kubenswrapper[5029]: I0313 20:48:35.631465 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:48:35 crc kubenswrapper[5029]: I0313 20:48:35.641642 5029 scope.go:117] "RemoveContainer" containerID="30f8752bb39d0132715ce80cf200c4e9d200fe4eaabb4fcec56aa42ae4a33712" Mar 13 20:48:35 crc kubenswrapper[5029]: I0313 20:48:35.699075 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-82hr9"] Mar 13 20:48:35 crc kubenswrapper[5029]: I0313 20:48:35.699303 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="dnsmasq-dns" containerID="cri-o://507e8ceb54d87635e7785755378901449576508413f6afae8beabd95ed4fe085" gracePeriod=10 Mar 13 20:48:36 crc kubenswrapper[5029]: I0313 20:48:36.009575 5029 generic.go:334] "Generic (PLEG): container finished" podID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerID="507e8ceb54d87635e7785755378901449576508413f6afae8beabd95ed4fe085" exitCode=0 Mar 13 20:48:36 crc kubenswrapper[5029]: I0313 20:48:36.009626 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" event={"ID":"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3","Type":"ContainerDied","Data":"507e8ceb54d87635e7785755378901449576508413f6afae8beabd95ed4fe085"} Mar 13 20:48:38 crc kubenswrapper[5029]: I0313 20:48:38.888790 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Mar 13 20:48:43 crc kubenswrapper[5029]: I0313 20:48:43.888903 5029 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Mar 13 20:48:44 crc kubenswrapper[5029]: E0313 20:48:44.601644 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Mar 13 20:48:44 crc kubenswrapper[5029]: E0313 20:48:44.602046 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n675h589h655h85h65ch665h5bh66ch5ddh646hcch5b5hbch66bh5c4h5cbh599h69hcfh589hfch545h57hd4h66bh54hd7hch59fh554h55ch646q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7fdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCo
ntext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-85ff45d975-bg6kz_openstack(dd0573dc-2006-4faa-9286-c7743e50e702): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:48:44 crc kubenswrapper[5029]: E0313 20:48:44.623150 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-85ff45d975-bg6kz" podUID="dd0573dc-2006-4faa-9286-c7743e50e702" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.683928 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.814776 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-ceph\") pod \"00c384cd-721b-41fb-96c1-b493d2cf3497\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.814918 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-scripts\") pod \"00c384cd-721b-41fb-96c1-b493d2cf3497\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.814948 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-config-data\") pod \"00c384cd-721b-41fb-96c1-b493d2cf3497\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.814985 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"00c384cd-721b-41fb-96c1-b493d2cf3497\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.815084 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-public-tls-certs\") pod \"00c384cd-721b-41fb-96c1-b493d2cf3497\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.815118 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-httpd-run\") pod \"00c384cd-721b-41fb-96c1-b493d2cf3497\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.815254 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-combined-ca-bundle\") pod \"00c384cd-721b-41fb-96c1-b493d2cf3497\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.816388 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-logs\") pod \"00c384cd-721b-41fb-96c1-b493d2cf3497\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.816451 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75vjd\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-kube-api-access-75vjd\") pod \"00c384cd-721b-41fb-96c1-b493d2cf3497\" (UID: \"00c384cd-721b-41fb-96c1-b493d2cf3497\") " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.816638 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "00c384cd-721b-41fb-96c1-b493d2cf3497" (UID: "00c384cd-721b-41fb-96c1-b493d2cf3497"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.817066 5029 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.817047 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-logs" (OuterVolumeSpecName: "logs") pod "00c384cd-721b-41fb-96c1-b493d2cf3497" (UID: "00c384cd-721b-41fb-96c1-b493d2cf3497"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.822070 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-ceph" (OuterVolumeSpecName: "ceph") pod "00c384cd-721b-41fb-96c1-b493d2cf3497" (UID: "00c384cd-721b-41fb-96c1-b493d2cf3497"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.822283 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-scripts" (OuterVolumeSpecName: "scripts") pod "00c384cd-721b-41fb-96c1-b493d2cf3497" (UID: "00c384cd-721b-41fb-96c1-b493d2cf3497"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.823210 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-kube-api-access-75vjd" (OuterVolumeSpecName: "kube-api-access-75vjd") pod "00c384cd-721b-41fb-96c1-b493d2cf3497" (UID: "00c384cd-721b-41fb-96c1-b493d2cf3497"). InnerVolumeSpecName "kube-api-access-75vjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.823356 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "00c384cd-721b-41fb-96c1-b493d2cf3497" (UID: "00c384cd-721b-41fb-96c1-b493d2cf3497"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.844627 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00c384cd-721b-41fb-96c1-b493d2cf3497" (UID: "00c384cd-721b-41fb-96c1-b493d2cf3497"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.875377 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-config-data" (OuterVolumeSpecName: "config-data") pod "00c384cd-721b-41fb-96c1-b493d2cf3497" (UID: "00c384cd-721b-41fb-96c1-b493d2cf3497"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.875754 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "00c384cd-721b-41fb-96c1-b493d2cf3497" (UID: "00c384cd-721b-41fb-96c1-b493d2cf3497"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.919573 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75vjd\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-kube-api-access-75vjd\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.920752 5029 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/00c384cd-721b-41fb-96c1-b493d2cf3497-ceph\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.920844 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.920946 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.921033 5029 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.921093 5029 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.921172 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c384cd-721b-41fb-96c1-b493d2cf3497-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.921234 5029 reconciler_common.go:293] "Volume detached 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c384cd-721b-41fb-96c1-b493d2cf3497-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:44 crc kubenswrapper[5029]: I0313 20:48:44.941652 5029 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.024103 5029 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.096755 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"00c384cd-721b-41fb-96c1-b493d2cf3497","Type":"ContainerDied","Data":"ab5366a590248bfa859a9f2450b12303f7ad35f92403dc9f9f768ed104d133a7"} Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.096882 5029 scope.go:117] "RemoveContainer" containerID="fadf38e4f74c9089641a055d7a9ae9d9e6b3387b803edb331bb4dc29e7c2754d" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.096984 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.155395 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.160028 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.185306 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:48:45 crc kubenswrapper[5029]: E0313 20:48:45.185658 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerName="glance-log" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.185674 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerName="glance-log" Mar 13 20:48:45 crc kubenswrapper[5029]: E0313 20:48:45.185698 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerName="glance-httpd" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.185704 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerName="glance-httpd" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.185892 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerName="glance-httpd" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.185909 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c384cd-721b-41fb-96c1-b493d2cf3497" containerName="glance-log" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.186829 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.188929 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.192657 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.209677 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.334517 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.334766 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.334800 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.335225 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.335562 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-logs\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.335667 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.335715 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.335741 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frj5x\" (UniqueName: \"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-kube-api-access-frj5x\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.336171 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.437893 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.437961 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.438194 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.438214 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.438238 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-ceph\") 
pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.438307 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-logs\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.438332 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.438351 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.438370 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frj5x\" (UniqueName: \"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-kube-api-access-frj5x\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.439060 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: 
\"fa59f852-51b9-4576-9935-401acd4199bf\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.439202 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.439299 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-logs\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.444632 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.445032 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.445349 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " 
pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.445843 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.455831 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.456716 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frj5x\" (UniqueName: \"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-kube-api-access-frj5x\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.472268 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " pod="openstack/glance-default-external-api-0" Mar 13 20:48:45 crc kubenswrapper[5029]: I0313 20:48:45.517609 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:48:46 crc kubenswrapper[5029]: I0313 20:48:46.615539 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c384cd-721b-41fb-96c1-b493d2cf3497" path="/var/lib/kubelet/pods/00c384cd-721b-41fb-96c1-b493d2cf3497/volumes" Mar 13 20:48:47 crc kubenswrapper[5029]: E0313 20:48:47.768138 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Mar 13 20:48:47 crc kubenswrapper[5029]: E0313 20:48:47.769072 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67fh5cbh94h666h54fh5dhf6hfbh547h56bh75h5dbh56dh8dh675h88h74hb8hddh594h7dh84h5f5h669h674h644h557h59h5d8hc6h666hf6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xddd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Rea
dinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-65ffd59b99-lqljn_openstack(d133172a-0047-408d-8f55-270a9f6462ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:48:47 crc kubenswrapper[5029]: E0313 20:48:47.772965 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-65ffd59b99-lqljn" podUID="d133172a-0047-408d-8f55-270a9f6462ca" Mar 13 20:48:48 crc kubenswrapper[5029]: E0313 20:48:48.205207 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Mar 13 20:48:48 crc kubenswrapper[5029]: E0313 20:48:48.205603 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n76h74h567h565h7fh695h54bh6h5c5h567h68h5bhd4h5d4h656h57fh644h57fhch588h678h54ch649h696hf9h5dh55fh54dhcch676h95h64cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpbb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(160773c1-ebe6-4b3b-b26d-5745cbf9ef70): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.308753 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.401560 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-scripts\") pod \"0f5a8379-377e-403c-a29b-cb80913e1ad9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.401636 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-config-data\") pod \"0f5a8379-377e-403c-a29b-cb80913e1ad9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.401703 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-combined-ca-bundle\") pod \"0f5a8379-377e-403c-a29b-cb80913e1ad9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.401919 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct5m8\" (UniqueName: \"kubernetes.io/projected/0f5a8379-377e-403c-a29b-cb80913e1ad9-kube-api-access-ct5m8\") pod \"0f5a8379-377e-403c-a29b-cb80913e1ad9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.401992 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-credential-keys\") pod \"0f5a8379-377e-403c-a29b-cb80913e1ad9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.402119 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-fernet-keys\") pod \"0f5a8379-377e-403c-a29b-cb80913e1ad9\" (UID: \"0f5a8379-377e-403c-a29b-cb80913e1ad9\") " Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.411574 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-scripts" (OuterVolumeSpecName: "scripts") pod "0f5a8379-377e-403c-a29b-cb80913e1ad9" (UID: "0f5a8379-377e-403c-a29b-cb80913e1ad9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.412002 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0f5a8379-377e-403c-a29b-cb80913e1ad9" (UID: "0f5a8379-377e-403c-a29b-cb80913e1ad9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.421204 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0f5a8379-377e-403c-a29b-cb80913e1ad9" (UID: "0f5a8379-377e-403c-a29b-cb80913e1ad9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.422905 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5a8379-377e-403c-a29b-cb80913e1ad9-kube-api-access-ct5m8" (OuterVolumeSpecName: "kube-api-access-ct5m8") pod "0f5a8379-377e-403c-a29b-cb80913e1ad9" (UID: "0f5a8379-377e-403c-a29b-cb80913e1ad9"). InnerVolumeSpecName "kube-api-access-ct5m8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.454906 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f5a8379-377e-403c-a29b-cb80913e1ad9" (UID: "0f5a8379-377e-403c-a29b-cb80913e1ad9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.461118 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-config-data" (OuterVolumeSpecName: "config-data") pod "0f5a8379-377e-403c-a29b-cb80913e1ad9" (UID: "0f5a8379-377e-403c-a29b-cb80913e1ad9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.504865 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct5m8\" (UniqueName: \"kubernetes.io/projected/0f5a8379-377e-403c-a29b-cb80913e1ad9-kube-api-access-ct5m8\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.504946 5029 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.504963 5029 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.504977 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 
20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.504996 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:48 crc kubenswrapper[5029]: I0313 20:48:48.505009 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5a8379-377e-403c-a29b-cb80913e1ad9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.136021 5029 generic.go:334] "Generic (PLEG): container finished" podID="4cd74a89-871d-499c-9362-d2ee8713147a" containerID="228079e89dc1372cfa4435fbee00985016bdda5232b901070c0d1d2349f8af7e" exitCode=0 Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.136292 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qdq6p" event={"ID":"4cd74a89-871d-499c-9362-d2ee8713147a","Type":"ContainerDied","Data":"228079e89dc1372cfa4435fbee00985016bdda5232b901070c0d1d2349f8af7e"} Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.138152 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r8cx9" event={"ID":"0f5a8379-377e-403c-a29b-cb80913e1ad9","Type":"ContainerDied","Data":"dc135bb077fcebcafa2832cd30c4a08d51d244f5d67717a81faed85a4cd86d2c"} Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.138194 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc135bb077fcebcafa2832cd30c4a08d51d244f5d67717a81faed85a4cd86d2c" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.138335 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r8cx9" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.396759 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r8cx9"] Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.404378 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r8cx9"] Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.528062 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xmjp6"] Mar 13 20:48:49 crc kubenswrapper[5029]: E0313 20:48:49.529286 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5a8379-377e-403c-a29b-cb80913e1ad9" containerName="keystone-bootstrap" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.529313 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5a8379-377e-403c-a29b-cb80913e1ad9" containerName="keystone-bootstrap" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.533470 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5a8379-377e-403c-a29b-cb80913e1ad9" containerName="keystone-bootstrap" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.534737 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.537274 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.537437 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.537592 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.537606 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qpzzs" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.540350 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.541893 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xmjp6"] Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.626254 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-fernet-keys\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.626386 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-combined-ca-bundle\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.626497 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p8t5m\" (UniqueName: \"kubernetes.io/projected/c020ac40-202f-4f46-b658-f1cce4d0ad1d-kube-api-access-p8t5m\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.626560 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-scripts\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.626584 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-config-data\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.626602 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-credential-keys\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.728256 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-scripts\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.728315 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-config-data\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.728353 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-credential-keys\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.728378 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-fernet-keys\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.728443 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-combined-ca-bundle\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.728565 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8t5m\" (UniqueName: \"kubernetes.io/projected/c020ac40-202f-4f46-b658-f1cce4d0ad1d-kube-api-access-p8t5m\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.744239 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-scripts\") pod \"keystone-bootstrap-xmjp6\" 
(UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.744552 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-combined-ca-bundle\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.744825 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-config-data\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.746940 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-credential-keys\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.747743 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-fernet-keys\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: I0313 20:48:49.753748 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8t5m\" (UniqueName: \"kubernetes.io/projected/c020ac40-202f-4f46-b658-f1cce4d0ad1d-kube-api-access-p8t5m\") pod \"keystone-bootstrap-xmjp6\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:49 crc kubenswrapper[5029]: 
I0313 20:48:49.865514 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:48:50 crc kubenswrapper[5029]: I0313 20:48:50.614447 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5a8379-377e-403c-a29b-cb80913e1ad9" path="/var/lib/kubelet/pods/0f5a8379-377e-403c-a29b-cb80913e1ad9/volumes" Mar 13 20:48:53 crc kubenswrapper[5029]: I0313 20:48:53.889021 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Mar 13 20:48:53 crc kubenswrapper[5029]: I0313 20:48:53.889680 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:56 crc kubenswrapper[5029]: E0313 20:48:56.343563 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Mar 13 20:48:56 crc kubenswrapper[5029]: E0313 20:48:56.344459 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gd5cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-h5hp5_openstack(a5243e50-28ff-4f5c-aeb1-97a87b1f2765): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:48:56 crc kubenswrapper[5029]: E0313 20:48:56.345914 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-h5hp5" 
podUID="a5243e50-28ff-4f5c-aeb1-97a87b1f2765" Mar 13 20:48:56 crc kubenswrapper[5029]: E0313 20:48:56.352134 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Mar 13 20:48:56 crc kubenswrapper[5029]: E0313 20:48:56.352336 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfch8bh564h56dh5f5h59ch8h698hc5h674h84hb6h5d4h5cch698h675h558h55bhbh55fhbbh5c5h87hfhfbh546h586h646h55dh59dh5f9h5c6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnqwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem
:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-75cff898d9-qm9m6_openstack(ab2fa20b-b10c-4818-b493-705c299a1982): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:48:56 crc kubenswrapper[5029]: E0313 20:48:56.354492 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-75cff898d9-qm9m6" podUID="ab2fa20b-b10c-4818-b493-705c299a1982" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.466184 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.622488 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-config-data\") pod \"dd0573dc-2006-4faa-9286-c7743e50e702\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.623110 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-scripts\") pod \"dd0573dc-2006-4faa-9286-c7743e50e702\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.623726 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-scripts" (OuterVolumeSpecName: "scripts") pod "dd0573dc-2006-4faa-9286-c7743e50e702" (UID: "dd0573dc-2006-4faa-9286-c7743e50e702"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.623842 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-config-data" (OuterVolumeSpecName: "config-data") pod "dd0573dc-2006-4faa-9286-c7743e50e702" (UID: "dd0573dc-2006-4faa-9286-c7743e50e702"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.623899 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd0573dc-2006-4faa-9286-c7743e50e702-horizon-secret-key\") pod \"dd0573dc-2006-4faa-9286-c7743e50e702\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.624107 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0573dc-2006-4faa-9286-c7743e50e702-logs\") pod \"dd0573dc-2006-4faa-9286-c7743e50e702\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.624219 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7fdc\" (UniqueName: \"kubernetes.io/projected/dd0573dc-2006-4faa-9286-c7743e50e702-kube-api-access-d7fdc\") pod \"dd0573dc-2006-4faa-9286-c7743e50e702\" (UID: \"dd0573dc-2006-4faa-9286-c7743e50e702\") " Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.624812 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0573dc-2006-4faa-9286-c7743e50e702-logs" (OuterVolumeSpecName: "logs") pod "dd0573dc-2006-4faa-9286-c7743e50e702" (UID: "dd0573dc-2006-4faa-9286-c7743e50e702"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.625524 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.625548 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd0573dc-2006-4faa-9286-c7743e50e702-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.625561 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0573dc-2006-4faa-9286-c7743e50e702-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.628646 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0573dc-2006-4faa-9286-c7743e50e702-kube-api-access-d7fdc" (OuterVolumeSpecName: "kube-api-access-d7fdc") pod "dd0573dc-2006-4faa-9286-c7743e50e702" (UID: "dd0573dc-2006-4faa-9286-c7743e50e702"). InnerVolumeSpecName "kube-api-access-d7fdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.629242 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0573dc-2006-4faa-9286-c7743e50e702-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dd0573dc-2006-4faa-9286-c7743e50e702" (UID: "dd0573dc-2006-4faa-9286-c7743e50e702"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.717655 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.717710 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.727152 5029 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd0573dc-2006-4faa-9286-c7743e50e702-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.727182 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7fdc\" (UniqueName: \"kubernetes.io/projected/dd0573dc-2006-4faa-9286-c7743e50e702-kube-api-access-d7fdc\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:56 crc kubenswrapper[5029]: I0313 20:48:56.799733 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-674bcdb76-8wx84"] Mar 13 20:48:56 crc kubenswrapper[5029]: E0313 20:48:56.912300 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-manila-api:current-podified" Mar 13 20:48:56 crc kubenswrapper[5029]: E0313 20:48:56.912450 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manila-db-sync,Image:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,Command:[/bin/bash],Args:[-c sleep 0 && /usr/bin/manila-manage --config-dir /etc/manila/manila.conf.d db 
sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:job-config-data,ReadOnly:true,MountPath:/etc/manila/manila.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjb78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42429,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42429,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-db-sync-76l7z_openstack(e27175d1-38d4-4709-9d98-b71adc445f02): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 20:48:56 crc kubenswrapper[5029]: E0313 20:48:56.914861 5029 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/manila-db-sync-76l7z" podUID="e27175d1-38d4-4709-9d98-b71adc445f02" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.016558 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.027531 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032378 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d133172a-0047-408d-8f55-270a9f6462ca-logs\") pod \"d133172a-0047-408d-8f55-270a9f6462ca\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032560 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-httpd-run\") pod \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032589 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz5ng\" (UniqueName: \"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-kube-api-access-gz5ng\") pod \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032616 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d133172a-0047-408d-8f55-270a9f6462ca-horizon-secret-key\") pod \"d133172a-0047-408d-8f55-270a9f6462ca\" (UID: 
\"d133172a-0047-408d-8f55-270a9f6462ca\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032650 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-logs\") pod \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032685 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-scripts\") pod \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032703 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-scripts\") pod \"d133172a-0047-408d-8f55-270a9f6462ca\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032728 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-internal-tls-certs\") pod \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032749 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-config-data\") pod \"d133172a-0047-408d-8f55-270a9f6462ca\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032780 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-ceph\") pod \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032794 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-combined-ca-bundle\") pod \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032830 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xddd9\" (UniqueName: \"kubernetes.io/projected/d133172a-0047-408d-8f55-270a9f6462ca-kube-api-access-xddd9\") pod \"d133172a-0047-408d-8f55-270a9f6462ca\" (UID: \"d133172a-0047-408d-8f55-270a9f6462ca\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032843 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-config-data\") pod \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032890 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\" (UID: \"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.035219 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.032725 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d133172a-0047-408d-8f55-270a9f6462ca-logs" (OuterVolumeSpecName: "logs") pod "d133172a-0047-408d-8f55-270a9f6462ca" (UID: "d133172a-0047-408d-8f55-270a9f6462ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.033754 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-logs" (OuterVolumeSpecName: "logs") pod "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" (UID: "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.033881 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" (UID: "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.046288 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-ceph" (OuterVolumeSpecName: "ceph") pod "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" (UID: "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.046700 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-config-data" (OuterVolumeSpecName: "config-data") pod "d133172a-0047-408d-8f55-270a9f6462ca" (UID: "d133172a-0047-408d-8f55-270a9f6462ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.047006 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-scripts" (OuterVolumeSpecName: "scripts") pod "d133172a-0047-408d-8f55-270a9f6462ca" (UID: "d133172a-0047-408d-8f55-270a9f6462ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.056405 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d133172a-0047-408d-8f55-270a9f6462ca-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d133172a-0047-408d-8f55-270a9f6462ca" (UID: "d133172a-0047-408d-8f55-270a9f6462ca"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.062390 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d133172a-0047-408d-8f55-270a9f6462ca-kube-api-access-xddd9" (OuterVolumeSpecName: "kube-api-access-xddd9") pod "d133172a-0047-408d-8f55-270a9f6462ca" (UID: "d133172a-0047-408d-8f55-270a9f6462ca"). InnerVolumeSpecName "kube-api-access-xddd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.064179 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.064463 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" (UID: "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.069038 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-scripts" (OuterVolumeSpecName: "scripts") pod "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" (UID: "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.069648 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-kube-api-access-gz5ng" (OuterVolumeSpecName: "kube-api-access-gz5ng") pod "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" (UID: "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293"). InnerVolumeSpecName "kube-api-access-gz5ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.123051 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" (UID: "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.124229 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" (UID: "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.134114 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-sb\") pod \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.134180 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-combined-ca-bundle\") pod \"4cd74a89-871d-499c-9362-d2ee8713147a\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.134229 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-config\") pod \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.134338 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-config\") pod \"4cd74a89-871d-499c-9362-d2ee8713147a\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.134390 5029 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-dns-svc\") pod \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.134517 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-nb\") pod \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.134612 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skjwt\" (UniqueName: \"kubernetes.io/projected/4cd74a89-871d-499c-9362-d2ee8713147a-kube-api-access-skjwt\") pod \"4cd74a89-871d-499c-9362-d2ee8713147a\" (UID: \"4cd74a89-871d-499c-9362-d2ee8713147a\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.134714 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trrqj\" (UniqueName: \"kubernetes.io/projected/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-kube-api-access-trrqj\") pod \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\" (UID: \"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3\") " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135455 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135482 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135494 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135504 5029 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135514 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d133172a-0047-408d-8f55-270a9f6462ca-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135523 5029 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-ceph\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135533 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135542 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xddd9\" (UniqueName: \"kubernetes.io/projected/d133172a-0047-408d-8f55-270a9f6462ca-kube-api-access-xddd9\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135565 5029 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135576 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d133172a-0047-408d-8f55-270a9f6462ca-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 
20:48:57.135588 5029 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135600 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz5ng\" (UniqueName: \"kubernetes.io/projected/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-kube-api-access-gz5ng\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.135611 5029 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d133172a-0047-408d-8f55-270a9f6462ca-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.142625 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd74a89-871d-499c-9362-d2ee8713147a-kube-api-access-skjwt" (OuterVolumeSpecName: "kube-api-access-skjwt") pod "4cd74a89-871d-499c-9362-d2ee8713147a" (UID: "4cd74a89-871d-499c-9362-d2ee8713147a"). InnerVolumeSpecName "kube-api-access-skjwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.155333 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-kube-api-access-trrqj" (OuterVolumeSpecName: "kube-api-access-trrqj") pod "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" (UID: "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3"). InnerVolumeSpecName "kube-api-access-trrqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.166114 5029 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.168939 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cd74a89-871d-499c-9362-d2ee8713147a" (UID: "4cd74a89-871d-499c-9362-d2ee8713147a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.171123 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-config-data" (OuterVolumeSpecName: "config-data") pod "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" (UID: "a80ec239-5d0d-48ed-88dc-f7dc1f8ab293"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.176984 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-config" (OuterVolumeSpecName: "config") pod "4cd74a89-871d-499c-9362-d2ee8713147a" (UID: "4cd74a89-871d-499c-9362-d2ee8713147a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.187009 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" (UID: "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.188633 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" (UID: "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.208730 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-config" (OuterVolumeSpecName: "config") pod "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" (UID: "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.210237 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" (UID: "af06cd5d-f17a-417e-8c5e-1087f6c2eaa3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.216168 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85ff45d975-bg6kz" event={"ID":"dd0573dc-2006-4faa-9286-c7743e50e702","Type":"ContainerDied","Data":"58ff262db35a5aaa99b307c05af52eef41a99f93cd312f7694c2d0478d4dfd42"} Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.216278 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85ff45d975-bg6kz" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.222088 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qdq6p" event={"ID":"4cd74a89-871d-499c-9362-d2ee8713147a","Type":"ContainerDied","Data":"04ee3fea4ffc1549f184daf107e1fff3687ed4e0851c111aff0ced667c1cfbef"} Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.222135 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04ee3fea4ffc1549f184daf107e1fff3687ed4e0851c111aff0ced667c1cfbef" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.222096 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qdq6p" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.225068 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a80ec239-5d0d-48ed-88dc-f7dc1f8ab293","Type":"ContainerDied","Data":"5c94e0cb2dc7e450d27d63ed5cd65ec7e00e0d53042419f19cfce092a63a9914"} Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.225211 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.228456 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65ffd59b99-lqljn" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.228453 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65ffd59b99-lqljn" event={"ID":"d133172a-0047-408d-8f55-270a9f6462ca","Type":"ContainerDied","Data":"49164f5ff0017574529c1733e332025f3f2745e66dd8cdc19834fe386cb5e270"} Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.230961 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.232662 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" event={"ID":"af06cd5d-f17a-417e-8c5e-1087f6c2eaa3","Type":"ContainerDied","Data":"5bd57658c22d7895283a718211ebf118f982a4dd5279271d27a371e5221b86f0"} Mar 13 20:48:57 crc kubenswrapper[5029]: E0313 20:48:57.235918 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-h5hp5" podUID="a5243e50-28ff-4f5c-aeb1-97a87b1f2765" Mar 13 20:48:57 crc kubenswrapper[5029]: E0313 20:48:57.235922 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-manila-api:current-podified\\\"\"" pod="openstack/manila-db-sync-76l7z" podUID="e27175d1-38d4-4709-9d98-b71adc445f02" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.245391 5029 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.245437 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skjwt\" (UniqueName: \"kubernetes.io/projected/4cd74a89-871d-499c-9362-d2ee8713147a-kube-api-access-skjwt\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.245450 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trrqj\" (UniqueName: \"kubernetes.io/projected/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-kube-api-access-trrqj\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc 
kubenswrapper[5029]: I0313 20:48:57.245460 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.245514 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.245525 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.245535 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cd74a89-871d-499c-9362-d2ee8713147a-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.245543 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.245642 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.245653 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.344930 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:48:57 crc 
kubenswrapper[5029]: I0313 20:48:57.363472 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.383434 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:48:57 crc kubenswrapper[5029]: E0313 20:48:57.386943 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerName="glance-httpd" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.386979 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerName="glance-httpd" Mar 13 20:48:57 crc kubenswrapper[5029]: E0313 20:48:57.386991 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd74a89-871d-499c-9362-d2ee8713147a" containerName="neutron-db-sync" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.387000 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd74a89-871d-499c-9362-d2ee8713147a" containerName="neutron-db-sync" Mar 13 20:48:57 crc kubenswrapper[5029]: E0313 20:48:57.387025 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="dnsmasq-dns" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.387034 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="dnsmasq-dns" Mar 13 20:48:57 crc kubenswrapper[5029]: E0313 20:48:57.387060 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="init" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.387068 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="init" Mar 13 20:48:57 crc kubenswrapper[5029]: E0313 20:48:57.387099 5029 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerName="glance-log" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.387108 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerName="glance-log" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.387445 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerName="glance-log" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.387464 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="dnsmasq-dns" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.387482 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" containerName="glance-httpd" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.387506 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd74a89-871d-499c-9362-d2ee8713147a" containerName="neutron-db-sync" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.388904 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.392071 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.395067 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.435349 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85ff45d975-bg6kz"] Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.452282 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.454190 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.454253 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.454282 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-logs\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:48:57 crc kubenswrapper[5029]: 
I0313 20:48:57.454723 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.454867 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.454936 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.455046 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-ceph\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.455157 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdxz\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-kube-api-access-sfdxz\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.455314 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.475234 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-85ff45d975-bg6kz"]
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.514775 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65ffd59b99-lqljn"]
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.525573 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65ffd59b99-lqljn"]
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.538284 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-82hr9"]
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.557913 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.558031 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.558080 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-ceph\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.558122 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdxz\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-kube-api-access-sfdxz\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.558173 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.558234 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.558253 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.558275 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-logs\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.558316 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.561581 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.562297 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.562545 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-logs\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.562531 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-82hr9"]
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.575189 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.575432 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-ceph\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.578424 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.581583 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdxz\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-kube-api-access-sfdxz\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.596022 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.605235 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.654364 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:57 crc kubenswrapper[5029]: I0313 20:48:57.744067 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.400834 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7ht2z"]
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.403088 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.442625 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7ht2z"]
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.479839 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.480102 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.480715 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-config\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.480949 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hf4z\" (UniqueName: \"kubernetes.io/projected/6e726c0a-09e0-46c4-870f-440581c3af6e-kube-api-access-8hf4z\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.481316 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.481383 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.579235 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7cf8f459d4-bj2jk"]
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.584823 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.590761 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.591006 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.591145 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.591364 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.591767 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-config\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.591890 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hf4z\" (UniqueName: \"kubernetes.io/projected/6e726c0a-09e0-46c4-870f-440581c3af6e-kube-api-access-8hf4z\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.592100 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cf8f459d4-bj2jk"]
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.592645 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.591891 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.593041 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.599331 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.600393 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-config\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.601399 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qr96x"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.601989 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.602118 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.602371 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.657407 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hf4z\" (UniqueName: \"kubernetes.io/projected/6e726c0a-09e0-46c4-870f-440581c3af6e-kube-api-access-8hf4z\") pod \"dnsmasq-dns-55f844cf75-7ht2z\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.661282 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a80ec239-5d0d-48ed-88dc-f7dc1f8ab293" path="/var/lib/kubelet/pods/a80ec239-5d0d-48ed-88dc-f7dc1f8ab293/volumes"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.662659 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" path="/var/lib/kubelet/pods/af06cd5d-f17a-417e-8c5e-1087f6c2eaa3/volumes"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.663524 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d133172a-0047-408d-8f55-270a9f6462ca" path="/var/lib/kubelet/pods/d133172a-0047-408d-8f55-270a9f6462ca/volumes"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.665494 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0573dc-2006-4faa-9286-c7743e50e702" path="/var/lib/kubelet/pods/dd0573dc-2006-4faa-9286-c7743e50e702/volumes"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.699281 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-ovndb-tls-certs\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.699365 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqhf8\" (UniqueName: \"kubernetes.io/projected/da8a5250-75de-4986-ab96-2415b667cac1-kube-api-access-mqhf8\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.699436 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-combined-ca-bundle\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.699477 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-httpd-config\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.699576 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-config\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.736521 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.801385 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-ovndb-tls-certs\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.801461 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqhf8\" (UniqueName: \"kubernetes.io/projected/da8a5250-75de-4986-ab96-2415b667cac1-kube-api-access-mqhf8\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.801527 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-combined-ca-bundle\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.801562 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-httpd-config\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.801617 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-config\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.806079 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-config\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.808275 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-ovndb-tls-certs\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.810955 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-combined-ca-bundle\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.817774 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-httpd-config\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.822340 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqhf8\" (UniqueName: \"kubernetes.io/projected/da8a5250-75de-4986-ab96-2415b667cac1-kube-api-access-mqhf8\") pod \"neutron-7cf8f459d4-bj2jk\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.890000 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-82hr9" podUID="af06cd5d-f17a-417e-8c5e-1087f6c2eaa3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout"
Mar 13 20:48:58 crc kubenswrapper[5029]: I0313 20:48:58.914130 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cf8f459d4-bj2jk"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.654931 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85f64689c7-r5skz"]
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.658413 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.664736 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.664879 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.679223 5029 scope.go:117] "RemoveContainer" containerID="c293b5137634f5cdb4e4f9d0179e2ff7c07fe904fde8ddf6446d1329b0dcef70"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.684871 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85f64689c7-r5skz"]
Mar 13 20:49:00 crc kubenswrapper[5029]: E0313 20:49:00.697297 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Mar 13 20:49:00 crc kubenswrapper[5029]: E0313 20:49:00.697533 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9plv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xhhzb_openstack(e5a13c03-b012-4416-bb5b-3ff21417290a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 20:49:00 crc kubenswrapper[5029]: E0313 20:49:00.699179 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-xhhzb" podUID="e5a13c03-b012-4416-bb5b-3ff21417290a"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.741968 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvx47\" (UniqueName: \"kubernetes.io/projected/bb83b759-9e8e-4e99-8193-f8dbf847f440-kube-api-access-kvx47\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.742022 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-ovndb-tls-certs\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.742072 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-httpd-config\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.742107 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-config\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.742177 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-combined-ca-bundle\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.742205 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-internal-tls-certs\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.742241 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-public-tls-certs\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.851754 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvx47\" (UniqueName: \"kubernetes.io/projected/bb83b759-9e8e-4e99-8193-f8dbf847f440-kube-api-access-kvx47\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.852345 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-ovndb-tls-certs\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.852465 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-httpd-config\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.852516 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-config\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.852616 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-combined-ca-bundle\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.852658 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-internal-tls-certs\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.852742 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-public-tls-certs\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.873167 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-httpd-config\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.875136 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-public-tls-certs\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.875359 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-internal-tls-certs\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.884245 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-ovndb-tls-certs\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.891911 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvx47\" (UniqueName: \"kubernetes.io/projected/bb83b759-9e8e-4e99-8193-f8dbf847f440-kube-api-access-kvx47\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.896884 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-config\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.897130 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-combined-ca-bundle\") pod \"neutron-85f64689c7-r5skz\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.919692 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cff898d9-qm9m6"
Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.923345 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85f64689c7-r5skz" Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.959788 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2fa20b-b10c-4818-b493-705c299a1982-logs\") pod \"ab2fa20b-b10c-4818-b493-705c299a1982\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.959940 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-scripts\") pod \"ab2fa20b-b10c-4818-b493-705c299a1982\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.960010 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2fa20b-b10c-4818-b493-705c299a1982-horizon-secret-key\") pod \"ab2fa20b-b10c-4818-b493-705c299a1982\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.960075 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnqwh\" (UniqueName: \"kubernetes.io/projected/ab2fa20b-b10c-4818-b493-705c299a1982-kube-api-access-rnqwh\") pod \"ab2fa20b-b10c-4818-b493-705c299a1982\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.960171 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-config-data\") pod \"ab2fa20b-b10c-4818-b493-705c299a1982\" (UID: \"ab2fa20b-b10c-4818-b493-705c299a1982\") " Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.960173 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ab2fa20b-b10c-4818-b493-705c299a1982-logs" (OuterVolumeSpecName: "logs") pod "ab2fa20b-b10c-4818-b493-705c299a1982" (UID: "ab2fa20b-b10c-4818-b493-705c299a1982"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.960695 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2fa20b-b10c-4818-b493-705c299a1982-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.963751 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-scripts" (OuterVolumeSpecName: "scripts") pod "ab2fa20b-b10c-4818-b493-705c299a1982" (UID: "ab2fa20b-b10c-4818-b493-705c299a1982"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.964195 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-config-data" (OuterVolumeSpecName: "config-data") pod "ab2fa20b-b10c-4818-b493-705c299a1982" (UID: "ab2fa20b-b10c-4818-b493-705c299a1982"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.966459 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2fa20b-b10c-4818-b493-705c299a1982-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ab2fa20b-b10c-4818-b493-705c299a1982" (UID: "ab2fa20b-b10c-4818-b493-705c299a1982"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:00 crc kubenswrapper[5029]: I0313 20:49:00.967154 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2fa20b-b10c-4818-b493-705c299a1982-kube-api-access-rnqwh" (OuterVolumeSpecName: "kube-api-access-rnqwh") pod "ab2fa20b-b10c-4818-b493-705c299a1982" (UID: "ab2fa20b-b10c-4818-b493-705c299a1982"). InnerVolumeSpecName "kube-api-access-rnqwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.062411 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.062455 5029 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2fa20b-b10c-4818-b493-705c299a1982-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.062473 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnqwh\" (UniqueName: \"kubernetes.io/projected/ab2fa20b-b10c-4818-b493-705c299a1982-kube-api-access-rnqwh\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.062486 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2fa20b-b10c-4818-b493-705c299a1982-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.167686 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6c6bfdcb-59kpl"] Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.270305 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cff898d9-qm9m6" 
event={"ID":"ab2fa20b-b10c-4818-b493-705c299a1982","Type":"ContainerDied","Data":"216a5b4d58bef2db71fca4b28e377fab251231b15ce563431b7f64ebbc018210"} Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.270393 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cff898d9-qm9m6" Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.280955 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-674bcdb76-8wx84" event={"ID":"e88c424e-0503-40ac-9f24-5daa55912ff3","Type":"ContainerStarted","Data":"b483f32c9d74231849518837de99e4fc6bc885f9e22387550c8ba5627c2f5e1b"} Mar 13 20:49:01 crc kubenswrapper[5029]: E0313 20:49:01.285190 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-xhhzb" podUID="e5a13c03-b012-4416-bb5b-3ff21417290a" Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.365764 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75cff898d9-qm9m6"] Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.374604 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75cff898d9-qm9m6"] Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.512492 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.575308 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xmjp6"] Mar 13 20:49:01 crc kubenswrapper[5029]: W0313 20:49:01.578099 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53004b20_47d0_461d_b054_fb52f7a78770.slice/crio-4be6bdb4e84dc9cdd1fca69ad7f61e7582251a07149c1b5da82de5bc929830a6 
WatchSource:0}: Error finding container 4be6bdb4e84dc9cdd1fca69ad7f61e7582251a07149c1b5da82de5bc929830a6: Status 404 returned error can't find the container with id 4be6bdb4e84dc9cdd1fca69ad7f61e7582251a07149c1b5da82de5bc929830a6 Mar 13 20:49:01 crc kubenswrapper[5029]: W0313 20:49:01.579140 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc020ac40_202f_4f46_b658_f1cce4d0ad1d.slice/crio-2c4624736c5a2529967cf5338e157ec829dbe6e40d8ca15b3f90dd828e63dcbf WatchSource:0}: Error finding container 2c4624736c5a2529967cf5338e157ec829dbe6e40d8ca15b3f90dd828e63dcbf: Status 404 returned error can't find the container with id 2c4624736c5a2529967cf5338e157ec829dbe6e40d8ca15b3f90dd828e63dcbf Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.802360 5029 scope.go:117] "RemoveContainer" containerID="d6ac3e20d020f702fe8287ca43359638de8446604b7a4e2e454f01d628a83837" Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.948207 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7ht2z"] Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.950926 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:49:01 crc kubenswrapper[5029]: I0313 20:49:01.950990 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.134788 5029 scope.go:117] "RemoveContainer" 
containerID="51f4cc4e59f8ec239dfd6e4424c8fd8b04c9690055da461e234524106688ae86" Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.193780 5029 scope.go:117] "RemoveContainer" containerID="507e8ceb54d87635e7785755378901449576508413f6afae8beabd95ed4fe085" Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.341318 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xmjp6" event={"ID":"c020ac40-202f-4f46-b658-f1cce4d0ad1d","Type":"ContainerStarted","Data":"eb363144f5c395245f0b4d6a0c350450122f7f56b89bdb7dd188585b3c859838"} Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.341717 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xmjp6" event={"ID":"c020ac40-202f-4f46-b658-f1cce4d0ad1d","Type":"ContainerStarted","Data":"2c4624736c5a2529967cf5338e157ec829dbe6e40d8ca15b3f90dd828e63dcbf"} Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.355578 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" event={"ID":"6e726c0a-09e0-46c4-870f-440581c3af6e","Type":"ContainerStarted","Data":"53da69bc56b52352fd5e0381a0ee56ecdd3d74227d1adf560a164f1fa0099543"} Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.357109 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.367383 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53004b20-47d0-461d-b054-fb52f7a78770","Type":"ContainerStarted","Data":"4be6bdb4e84dc9cdd1fca69ad7f61e7582251a07149c1b5da82de5bc929830a6"} Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.369721 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xmjp6" podStartSLOduration=13.369696885 podStartE2EDuration="13.369696885s" podCreationTimestamp="2026-03-13 20:48:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:02.362416886 +0000 UTC m=+1302.378499309" watchObservedRunningTime="2026-03-13 20:49:02.369696885 +0000 UTC m=+1302.385779288" Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.373231 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6c6bfdcb-59kpl" event={"ID":"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9","Type":"ContainerStarted","Data":"f106ebf69c7299c21c5c7753bbc2818c14a5316ad6d639ccf592a076187cb946"} Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.377687 5029 scope.go:117] "RemoveContainer" containerID="39e96f71e696d13c55a0f76116a7a6ec474171ee48ec48e7992466cc9c087790" Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.392635 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qcjtl" event={"ID":"c75c1c18-27e6-4fae-bf58-03387b32e4f3","Type":"ContainerStarted","Data":"9bf8aca3ecad38f35de125da898a0c0e6af8f9f4a3e0c8b68486ea77d62d5a98"} Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.424933 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qcjtl" podStartSLOduration=8.541995613 podStartE2EDuration="37.424906762s" podCreationTimestamp="2026-03-13 20:48:25 +0000 UTC" firstStartedPulling="2026-03-13 20:48:27.455254667 +0000 UTC m=+1267.471337070" lastFinishedPulling="2026-03-13 20:48:56.338165806 +0000 UTC m=+1296.354248219" observedRunningTime="2026-03-13 20:49:02.419184255 +0000 UTC m=+1302.435266658" watchObservedRunningTime="2026-03-13 20:49:02.424906762 +0000 UTC m=+1302.440989165" Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.491359 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cf8f459d4-bj2jk"] Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.623677 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ab2fa20b-b10c-4818-b493-705c299a1982" path="/var/lib/kubelet/pods/ab2fa20b-b10c-4818-b493-705c299a1982/volumes" Mar 13 20:49:02 crc kubenswrapper[5029]: I0313 20:49:02.645132 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85f64689c7-r5skz"] Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.465611 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" event={"ID":"6e726c0a-09e0-46c4-870f-440581c3af6e","Type":"ContainerDied","Data":"1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa"} Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.465577 5029 generic.go:334] "Generic (PLEG): container finished" podID="6e726c0a-09e0-46c4-870f-440581c3af6e" containerID="1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa" exitCode=0 Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.475296 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-674bcdb76-8wx84" event={"ID":"e88c424e-0503-40ac-9f24-5daa55912ff3","Type":"ContainerStarted","Data":"db3611eac25048690c16bc22cdaff5ef0be5919d5ba9b31eb4f4c79cca5e2fcd"} Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.481525 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6c6bfdcb-59kpl" event={"ID":"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9","Type":"ContainerStarted","Data":"a6ab2709590ed237e109db82ee33f472cd645e5b45627f0cfb90ef7afafbc2dc"} Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.494078 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf8f459d4-bj2jk" event={"ID":"da8a5250-75de-4986-ab96-2415b667cac1","Type":"ContainerStarted","Data":"3c0571ae25d9f6ddcd432dbf2e81e8055f5c32434c78aea3186d589972cd419a"} Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.494145 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf8f459d4-bj2jk" 
event={"ID":"da8a5250-75de-4986-ab96-2415b667cac1","Type":"ContainerStarted","Data":"117c343948f7843f948c57b22ae398dec5b30e91c41cfabaafd081a87609765f"} Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.498412 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa59f852-51b9-4576-9935-401acd4199bf","Type":"ContainerStarted","Data":"d3273d3c671238de4e406034553bc9f6128cfd306673207f95f191b0df7f0026"} Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.521054 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53004b20-47d0-461d-b054-fb52f7a78770","Type":"ContainerStarted","Data":"c4af649cbd1fa3db80ce661da8c649767e18b13a2512ab095d54236f9b767c44"} Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.525025 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85f64689c7-r5skz" event={"ID":"bb83b759-9e8e-4e99-8193-f8dbf847f440","Type":"ContainerStarted","Data":"0085d1eb1604fb4548e9e43461f8bc7bf9ee54b12f800dfa9ff3c16b36293694"} Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.525110 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85f64689c7-r5skz" event={"ID":"bb83b759-9e8e-4e99-8193-f8dbf847f440","Type":"ContainerStarted","Data":"35453afda79e05c40c688fbf603711f9b08e9544185499f555f3bf2f1d350cb6"} Mar 13 20:49:03 crc kubenswrapper[5029]: I0313 20:49:03.527068 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"160773c1-ebe6-4b3b-b26d-5745cbf9ef70","Type":"ContainerStarted","Data":"ce4d5f28881921a2b6665cdede7f8a30d9b32d41ef9ab8c3708eadcc24b06e0c"} Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.546430 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6c6bfdcb-59kpl" 
event={"ID":"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9","Type":"ContainerStarted","Data":"f6af4dac6417db6513b5e2602d7469ad832100f41291a81205b348b878058d2d"} Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.550512 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf8f459d4-bj2jk" event={"ID":"da8a5250-75de-4986-ab96-2415b667cac1","Type":"ContainerStarted","Data":"ce6e450fa61563912ee229fa5e741c1f13eb1c053664e5a8dcb5162c835a2236"} Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.550980 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7cf8f459d4-bj2jk" Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.561374 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa59f852-51b9-4576-9935-401acd4199bf","Type":"ContainerStarted","Data":"ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4"} Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.564913 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.565022 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.565268 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-674bcdb76-8wx84" event={"ID":"e88c424e-0503-40ac-9f24-5daa55912ff3","Type":"ContainerStarted","Data":"50a8b6d73eb01caa78dd36633b4a309f6f5e6fa22d126bbd09aba0e86a4af0b2"} Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.585136 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f6c6bfdcb-59kpl" podStartSLOduration=29.251434704 podStartE2EDuration="30.585115864s" podCreationTimestamp="2026-03-13 20:48:34 +0000 UTC" firstStartedPulling="2026-03-13 20:49:01.557584416 +0000 UTC m=+1301.573666819" 
lastFinishedPulling="2026-03-13 20:49:02.891265566 +0000 UTC m=+1302.907347979" observedRunningTime="2026-03-13 20:49:04.573708663 +0000 UTC m=+1304.589791076" watchObservedRunningTime="2026-03-13 20:49:04.585115864 +0000 UTC m=+1304.601198267" Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.606577 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7cf8f459d4-bj2jk" podStartSLOduration=6.606547258 podStartE2EDuration="6.606547258s" podCreationTimestamp="2026-03-13 20:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:04.604485922 +0000 UTC m=+1304.620568335" watchObservedRunningTime="2026-03-13 20:49:04.606547258 +0000 UTC m=+1304.622629661" Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.634049 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-674bcdb76-8wx84" podStartSLOduration=29.229650499 podStartE2EDuration="30.634024278s" podCreationTimestamp="2026-03-13 20:48:34 +0000 UTC" firstStartedPulling="2026-03-13 20:49:00.731882816 +0000 UTC m=+1300.747965219" lastFinishedPulling="2026-03-13 20:49:02.136256595 +0000 UTC m=+1302.152338998" observedRunningTime="2026-03-13 20:49:04.627217273 +0000 UTC m=+1304.643299696" watchObservedRunningTime="2026-03-13 20:49:04.634024278 +0000 UTC m=+1304.650106681" Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.824106 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:49:04 crc kubenswrapper[5029]: I0313 20:49:04.824388 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-674bcdb76-8wx84" Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.582010 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" 
event={"ID":"6e726c0a-09e0-46c4-870f-440581c3af6e","Type":"ContainerStarted","Data":"ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742"} Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.582803 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.596064 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53004b20-47d0-461d-b054-fb52f7a78770","Type":"ContainerStarted","Data":"eedd795f1a4b53a18ee3b96fdd49986c1c746ca181f0e511ce6c35494e5c6c25"} Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.611116 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85f64689c7-r5skz" event={"ID":"bb83b759-9e8e-4e99-8193-f8dbf847f440","Type":"ContainerStarted","Data":"9fe692e61daa725a8bab8b13b6b6cd4a995542200a130715e3a5c4e3968886eb"} Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.611182 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" podStartSLOduration=7.61115553 podStartE2EDuration="7.61115553s" podCreationTimestamp="2026-03-13 20:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:05.602471913 +0000 UTC m=+1305.618554336" watchObservedRunningTime="2026-03-13 20:49:05.61115553 +0000 UTC m=+1305.627237943" Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.611492 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85f64689c7-r5skz" Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.632580 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"fa59f852-51b9-4576-9935-401acd4199bf","Type":"ContainerStarted","Data":"8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584"} Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.639164 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.639121023 podStartE2EDuration="8.639121023s" podCreationTimestamp="2026-03-13 20:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:05.629806799 +0000 UTC m=+1305.645889202" watchObservedRunningTime="2026-03-13 20:49:05.639121023 +0000 UTC m=+1305.655203446" Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.675760 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85f64689c7-r5skz" podStartSLOduration=5.675719111 podStartE2EDuration="5.675719111s" podCreationTimestamp="2026-03-13 20:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:05.65698724 +0000 UTC m=+1305.673069643" watchObservedRunningTime="2026-03-13 20:49:05.675719111 +0000 UTC m=+1305.691801514" Mar 13 20:49:05 crc kubenswrapper[5029]: I0313 20:49:05.700707 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=20.700667232 podStartE2EDuration="20.700667232s" podCreationTimestamp="2026-03-13 20:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:05.687764909 +0000 UTC m=+1305.703847332" watchObservedRunningTime="2026-03-13 20:49:05.700667232 +0000 UTC m=+1305.716749635" Mar 13 20:49:06 crc kubenswrapper[5029]: I0313 20:49:06.657809 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="c75c1c18-27e6-4fae-bf58-03387b32e4f3" containerID="9bf8aca3ecad38f35de125da898a0c0e6af8f9f4a3e0c8b68486ea77d62d5a98" exitCode=0 Mar 13 20:49:06 crc kubenswrapper[5029]: I0313 20:49:06.658083 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qcjtl" event={"ID":"c75c1c18-27e6-4fae-bf58-03387b32e4f3","Type":"ContainerDied","Data":"9bf8aca3ecad38f35de125da898a0c0e6af8f9f4a3e0c8b68486ea77d62d5a98"} Mar 13 20:49:07 crc kubenswrapper[5029]: I0313 20:49:07.708045 5029 generic.go:334] "Generic (PLEG): container finished" podID="c020ac40-202f-4f46-b658-f1cce4d0ad1d" containerID="eb363144f5c395245f0b4d6a0c350450122f7f56b89bdb7dd188585b3c859838" exitCode=0 Mar 13 20:49:07 crc kubenswrapper[5029]: I0313 20:49:07.708288 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xmjp6" event={"ID":"c020ac40-202f-4f46-b658-f1cce4d0ad1d","Type":"ContainerDied","Data":"eb363144f5c395245f0b4d6a0c350450122f7f56b89bdb7dd188585b3c859838"} Mar 13 20:49:07 crc kubenswrapper[5029]: I0313 20:49:07.744279 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 13 20:49:07 crc kubenswrapper[5029]: I0313 20:49:07.746211 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 13 20:49:07 crc kubenswrapper[5029]: I0313 20:49:07.795134 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 13 20:49:07 crc kubenswrapper[5029]: I0313 20:49:07.843096 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 13 20:49:08 crc kubenswrapper[5029]: I0313 20:49:08.720547 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 20:49:08 crc kubenswrapper[5029]: I0313 20:49:08.721157 5029 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.160817 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qcjtl" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.212540 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-config-data\") pod \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.213012 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75c1c18-27e6-4fae-bf58-03387b32e4f3-logs\") pod \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.213218 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-combined-ca-bundle\") pod \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.213369 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5vdw\" (UniqueName: \"kubernetes.io/projected/c75c1c18-27e6-4fae-bf58-03387b32e4f3-kube-api-access-m5vdw\") pod \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\" (UID: \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.213427 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-scripts\") pod \"c75c1c18-27e6-4fae-bf58-03387b32e4f3\" (UID: 
\"c75c1c18-27e6-4fae-bf58-03387b32e4f3\") " Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.213915 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75c1c18-27e6-4fae-bf58-03387b32e4f3-logs" (OuterVolumeSpecName: "logs") pod "c75c1c18-27e6-4fae-bf58-03387b32e4f3" (UID: "c75c1c18-27e6-4fae-bf58-03387b32e4f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.214062 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75c1c18-27e6-4fae-bf58-03387b32e4f3-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.234304 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75c1c18-27e6-4fae-bf58-03387b32e4f3-kube-api-access-m5vdw" (OuterVolumeSpecName: "kube-api-access-m5vdw") pod "c75c1c18-27e6-4fae-bf58-03387b32e4f3" (UID: "c75c1c18-27e6-4fae-bf58-03387b32e4f3"). InnerVolumeSpecName "kube-api-access-m5vdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.234941 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-scripts" (OuterVolumeSpecName: "scripts") pod "c75c1c18-27e6-4fae-bf58-03387b32e4f3" (UID: "c75c1c18-27e6-4fae-bf58-03387b32e4f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.255046 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-config-data" (OuterVolumeSpecName: "config-data") pod "c75c1c18-27e6-4fae-bf58-03387b32e4f3" (UID: "c75c1c18-27e6-4fae-bf58-03387b32e4f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.260525 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c75c1c18-27e6-4fae-bf58-03387b32e4f3" (UID: "c75c1c18-27e6-4fae-bf58-03387b32e4f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.316438 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.316477 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5vdw\" (UniqueName: \"kubernetes.io/projected/c75c1c18-27e6-4fae-bf58-03387b32e4f3-kube-api-access-m5vdw\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.316493 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.316505 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c1c18-27e6-4fae-bf58-03387b32e4f3-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.736228 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qcjtl" Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.736417 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qcjtl" event={"ID":"c75c1c18-27e6-4fae-bf58-03387b32e4f3","Type":"ContainerDied","Data":"3f86ef07a7494f1abd21fe92e65d8b85df33dbdb056fb1b00b119996635e4f64"} Mar 13 20:49:09 crc kubenswrapper[5029]: I0313 20:49:09.736454 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f86ef07a7494f1abd21fe92e65d8b85df33dbdb056fb1b00b119996635e4f64" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.301350 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85bfd56bd4-bs6qf"] Mar 13 20:49:10 crc kubenswrapper[5029]: E0313 20:49:10.301747 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c1c18-27e6-4fae-bf58-03387b32e4f3" containerName="placement-db-sync" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.301765 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c1c18-27e6-4fae-bf58-03387b32e4f3" containerName="placement-db-sync" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.301963 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75c1c18-27e6-4fae-bf58-03387b32e4f3" containerName="placement-db-sync" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.302868 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.306022 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.307787 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.307982 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.308092 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.308388 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xx5k2" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.338525 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-logs\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.338639 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-combined-ca-bundle\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.338668 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-scripts\") pod 
\"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.338693 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-internal-tls-certs\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.338747 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgb8g\" (UniqueName: \"kubernetes.io/projected/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-kube-api-access-tgb8g\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.338776 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-config-data\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.338794 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-public-tls-certs\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.343085 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85bfd56bd4-bs6qf"] Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.440584 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-logs\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.441306 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-combined-ca-bundle\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.441422 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-scripts\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.441502 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-internal-tls-certs\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.441614 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgb8g\" (UniqueName: \"kubernetes.io/projected/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-kube-api-access-tgb8g\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.441702 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-config-data\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.441777 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-public-tls-certs\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.441368 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-logs\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.458762 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-combined-ca-bundle\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.460294 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-config-data\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.461212 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-internal-tls-certs\") pod 
\"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.464303 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgb8g\" (UniqueName: \"kubernetes.io/projected/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-kube-api-access-tgb8g\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.464572 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-scripts\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.464964 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-public-tls-certs\") pod \"placement-85bfd56bd4-bs6qf\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:10 crc kubenswrapper[5029]: I0313 20:49:10.670308 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:11 crc kubenswrapper[5029]: I0313 20:49:11.458589 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.132787 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.176136 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-fernet-keys\") pod \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.176390 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8t5m\" (UniqueName: \"kubernetes.io/projected/c020ac40-202f-4f46-b658-f1cce4d0ad1d-kube-api-access-p8t5m\") pod \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.176498 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-credential-keys\") pod \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.176647 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-config-data\") pod \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.176767 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-combined-ca-bundle\") pod \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.176841 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-scripts\") pod \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\" (UID: \"c020ac40-202f-4f46-b658-f1cce4d0ad1d\") " Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.195155 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c020ac40-202f-4f46-b658-f1cce4d0ad1d-kube-api-access-p8t5m" (OuterVolumeSpecName: "kube-api-access-p8t5m") pod "c020ac40-202f-4f46-b658-f1cce4d0ad1d" (UID: "c020ac40-202f-4f46-b658-f1cce4d0ad1d"). InnerVolumeSpecName "kube-api-access-p8t5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.195562 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c020ac40-202f-4f46-b658-f1cce4d0ad1d" (UID: "c020ac40-202f-4f46-b658-f1cce4d0ad1d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.195702 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-scripts" (OuterVolumeSpecName: "scripts") pod "c020ac40-202f-4f46-b658-f1cce4d0ad1d" (UID: "c020ac40-202f-4f46-b658-f1cce4d0ad1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.217521 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c020ac40-202f-4f46-b658-f1cce4d0ad1d" (UID: "c020ac40-202f-4f46-b658-f1cce4d0ad1d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.229307 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-config-data" (OuterVolumeSpecName: "config-data") pod "c020ac40-202f-4f46-b658-f1cce4d0ad1d" (UID: "c020ac40-202f-4f46-b658-f1cce4d0ad1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.241697 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c020ac40-202f-4f46-b658-f1cce4d0ad1d" (UID: "c020ac40-202f-4f46-b658-f1cce4d0ad1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.281469 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8t5m\" (UniqueName: \"kubernetes.io/projected/c020ac40-202f-4f46-b658-f1cce4d0ad1d-kube-api-access-p8t5m\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.281559 5029 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.281572 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.281611 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.281625 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.281637 5029 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c020ac40-202f-4f46-b658-f1cce4d0ad1d-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.559071 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.585118 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85bfd56bd4-bs6qf"] Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.777433 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"160773c1-ebe6-4b3b-b26d-5745cbf9ef70","Type":"ContainerStarted","Data":"509a4f50aba58146b47f7a6520124efbe21784b4cda8e178e183d3168e9b3af9"} Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.782435 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xmjp6" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.782504 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xmjp6" event={"ID":"c020ac40-202f-4f46-b658-f1cce4d0ad1d","Type":"ContainerDied","Data":"2c4624736c5a2529967cf5338e157ec829dbe6e40d8ca15b3f90dd828e63dcbf"} Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.782581 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c4624736c5a2529967cf5338e157ec829dbe6e40d8ca15b3f90dd828e63dcbf" Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.784423 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bfd56bd4-bs6qf" event={"ID":"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb","Type":"ContainerStarted","Data":"eac05de69cf3415ff6ff831c5c2810d904da1ce1954426aaa4fa963d923d03ce"} Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.788175 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5hp5" event={"ID":"a5243e50-28ff-4f5c-aeb1-97a87b1f2765","Type":"ContainerStarted","Data":"24df77b155f847d4bcfc3c6cde67d1e0e0eeea2cfb3bba52cef98d37b9c28f3a"} Mar 13 20:49:12 crc kubenswrapper[5029]: I0313 20:49:12.816108 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h5hp5" podStartSLOduration=2.932350623 podStartE2EDuration="47.816083479s" podCreationTimestamp="2026-03-13 20:48:25 +0000 UTC" firstStartedPulling="2026-03-13 20:48:27.292080922 +0000 UTC m=+1267.308163325" lastFinishedPulling="2026-03-13 20:49:12.175813778 +0000 UTC m=+1312.191896181" observedRunningTime="2026-03-13 20:49:12.805063848 +0000 UTC m=+1312.821146251" watchObservedRunningTime="2026-03-13 20:49:12.816083479 +0000 UTC m=+1312.832165882" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.283366 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7bb76fc874-xq9l8"] Mar 13 
20:49:13 crc kubenswrapper[5029]: E0313 20:49:13.284541 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c020ac40-202f-4f46-b658-f1cce4d0ad1d" containerName="keystone-bootstrap" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.284565 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="c020ac40-202f-4f46-b658-f1cce4d0ad1d" containerName="keystone-bootstrap" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.284773 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="c020ac40-202f-4f46-b658-f1cce4d0ad1d" containerName="keystone-bootstrap" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.285643 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.289619 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.292079 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.292256 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.292426 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.292529 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.319379 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bb76fc874-xq9l8"] Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.330972 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qpzzs" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 
20:49:13.428841 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-config-data\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.432297 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2p7\" (UniqueName: \"kubernetes.io/projected/07e8467c-2f07-49d5-8c20-a33c8f9d4291-kube-api-access-5t2p7\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.432480 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-internal-tls-certs\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.432656 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-scripts\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.432843 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-fernet-keys\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 
20:49:13.433387 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-credential-keys\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.433497 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-public-tls-certs\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.433801 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-combined-ca-bundle\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.536363 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-config-data\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.536441 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2p7\" (UniqueName: \"kubernetes.io/projected/07e8467c-2f07-49d5-8c20-a33c8f9d4291-kube-api-access-5t2p7\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.536465 
5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-internal-tls-certs\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.536488 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-scripts\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.536526 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-fernet-keys\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.536557 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-credential-keys\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.536576 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-public-tls-certs\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.536625 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-combined-ca-bundle\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.545327 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-combined-ca-bundle\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.545920 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-scripts\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.552711 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-fernet-keys\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.553889 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-config-data\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.554030 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-credential-keys\") pod \"keystone-7bb76fc874-xq9l8\" (UID: 
\"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.561650 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-internal-tls-certs\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.563527 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e8467c-2f07-49d5-8c20-a33c8f9d4291-public-tls-certs\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.571519 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2p7\" (UniqueName: \"kubernetes.io/projected/07e8467c-2f07-49d5-8c20-a33c8f9d4291-kube-api-access-5t2p7\") pod \"keystone-7bb76fc874-xq9l8\" (UID: \"07e8467c-2f07-49d5-8c20-a33c8f9d4291\") " pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.618780 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.740049 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.828676 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bfd56bd4-bs6qf" event={"ID":"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb","Type":"ContainerStarted","Data":"2756430389c114d884682574fa01ce0a3d40540564fd8713da5614cfc51abb29"} Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.828746 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bfd56bd4-bs6qf" event={"ID":"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb","Type":"ContainerStarted","Data":"fc64aef6dfbf66b739e8f304c9cfd646cfaa779da452629bad2eb084763b2b31"} Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.829191 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.829281 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.831569 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rkc9f"] Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.832030 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" podUID="eb18b58b-6d93-4ca4-b191-161234269f8b" containerName="dnsmasq-dns" containerID="cri-o://ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295" gracePeriod=10 Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.851089 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-76l7z" 
event={"ID":"e27175d1-38d4-4709-9d98-b71adc445f02","Type":"ContainerStarted","Data":"7861a777c5174a282b3807ef824a4d516283ea7280d71642c0d0819694271996"} Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.888194 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85bfd56bd4-bs6qf" podStartSLOduration=3.888163171 podStartE2EDuration="3.888163171s" podCreationTimestamp="2026-03-13 20:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:13.872551995 +0000 UTC m=+1313.888634418" watchObservedRunningTime="2026-03-13 20:49:13.888163171 +0000 UTC m=+1313.904245574" Mar 13 20:49:13 crc kubenswrapper[5029]: I0313 20:49:13.936773 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-76l7z" podStartSLOduration=4.35224677 podStartE2EDuration="48.936734136s" podCreationTimestamp="2026-03-13 20:48:25 +0000 UTC" firstStartedPulling="2026-03-13 20:48:27.597398137 +0000 UTC m=+1267.613480540" lastFinishedPulling="2026-03-13 20:49:12.181885503 +0000 UTC m=+1312.197967906" observedRunningTime="2026-03-13 20:49:13.90133053 +0000 UTC m=+1313.917412943" watchObservedRunningTime="2026-03-13 20:49:13.936734136 +0000 UTC m=+1313.952816549" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.302125 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bb76fc874-xq9l8"] Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.589391 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6c6bfdcb-59kpl" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.636320 5029 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.688475 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-config\") pod \"eb18b58b-6d93-4ca4-b191-161234269f8b\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.688550 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-nb\") pod \"eb18b58b-6d93-4ca4-b191-161234269f8b\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.688623 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv4ng\" (UniqueName: \"kubernetes.io/projected/eb18b58b-6d93-4ca4-b191-161234269f8b-kube-api-access-nv4ng\") pod \"eb18b58b-6d93-4ca4-b191-161234269f8b\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.688663 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-svc\") pod \"eb18b58b-6d93-4ca4-b191-161234269f8b\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.688732 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-swift-storage-0\") pod \"eb18b58b-6d93-4ca4-b191-161234269f8b\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.688760 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-sb\") pod \"eb18b58b-6d93-4ca4-b191-161234269f8b\" (UID: \"eb18b58b-6d93-4ca4-b191-161234269f8b\") " Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.728270 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb18b58b-6d93-4ca4-b191-161234269f8b-kube-api-access-nv4ng" (OuterVolumeSpecName: "kube-api-access-nv4ng") pod "eb18b58b-6d93-4ca4-b191-161234269f8b" (UID: "eb18b58b-6d93-4ca4-b191-161234269f8b"). InnerVolumeSpecName "kube-api-access-nv4ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.806166 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb18b58b-6d93-4ca4-b191-161234269f8b" (UID: "eb18b58b-6d93-4ca4-b191-161234269f8b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.813790 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb18b58b-6d93-4ca4-b191-161234269f8b" (UID: "eb18b58b-6d93-4ca4-b191-161234269f8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.813805 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-config" (OuterVolumeSpecName: "config") pod "eb18b58b-6d93-4ca4-b191-161234269f8b" (UID: "eb18b58b-6d93-4ca4-b191-161234269f8b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.834758 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv4ng\" (UniqueName: \"kubernetes.io/projected/eb18b58b-6d93-4ca4-b191-161234269f8b-kube-api-access-nv4ng\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.834798 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.834812 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.834822 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.847360 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb18b58b-6d93-4ca4-b191-161234269f8b" (UID: "eb18b58b-6d93-4ca4-b191-161234269f8b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.847514 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb18b58b-6d93-4ca4-b191-161234269f8b" (UID: "eb18b58b-6d93-4ca4-b191-161234269f8b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.865194 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-674bcdb76-8wx84" podUID="e88c424e-0503-40ac-9f24-5daa55912ff3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.937262 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.937302 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb18b58b-6d93-4ca4-b191-161234269f8b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.965206 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bb76fc874-xq9l8" event={"ID":"07e8467c-2f07-49d5-8c20-a33c8f9d4291","Type":"ContainerStarted","Data":"a04734641ccb4f771215f94955907212dbe0cb2caeb96b43ce9cbc8e4bcc2347"} Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.988920 5029 generic.go:334] "Generic (PLEG): container finished" podID="eb18b58b-6d93-4ca4-b191-161234269f8b" containerID="ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295" exitCode=0 Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.989059 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.989037 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" event={"ID":"eb18b58b-6d93-4ca4-b191-161234269f8b","Type":"ContainerDied","Data":"ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295"} Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.989691 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rkc9f" event={"ID":"eb18b58b-6d93-4ca4-b191-161234269f8b","Type":"ContainerDied","Data":"6f2d27bed78aa83d46924117d8d88af3e6e68d245089ec303b96f5e2eea7119e"} Mar 13 20:49:14 crc kubenswrapper[5029]: I0313 20:49:14.989717 5029 scope.go:117] "RemoveContainer" containerID="ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.040265 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rkc9f"] Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.045976 5029 scope.go:117] "RemoveContainer" containerID="ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.053083 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rkc9f"] Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.115211 5029 scope.go:117] "RemoveContainer" containerID="ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295" Mar 13 20:49:15 crc kubenswrapper[5029]: E0313 20:49:15.116369 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295\": container with ID starting with ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295 not found: ID does not exist" 
containerID="ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.116426 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295"} err="failed to get container status \"ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295\": rpc error: code = NotFound desc = could not find container \"ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295\": container with ID starting with ebc4bd31609c99355b041eeeb9f793432a0fc9aa44dbdad1568793bdc47dc295 not found: ID does not exist" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.116455 5029 scope.go:117] "RemoveContainer" containerID="ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff" Mar 13 20:49:15 crc kubenswrapper[5029]: E0313 20:49:15.117049 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff\": container with ID starting with ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff not found: ID does not exist" containerID="ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.117114 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff"} err="failed to get container status \"ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff\": rpc error: code = NotFound desc = could not find container \"ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff\": container with ID starting with ca1d269277651f942eddc2011f852fdb0ae0ed6670188dd795910bf7144c08ff not found: ID does not exist" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.518471 5029 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.518997 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.519018 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.519031 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.575721 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 13 20:49:15 crc kubenswrapper[5029]: I0313 20:49:15.576283 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 13 20:49:16 crc kubenswrapper[5029]: I0313 20:49:16.026725 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xhhzb" event={"ID":"e5a13c03-b012-4416-bb5b-3ff21417290a","Type":"ContainerStarted","Data":"bb6621cafaff21691e905a7a332368bcd78b04c74bea53ae656f96b343a4c154"} Mar 13 20:49:16 crc kubenswrapper[5029]: I0313 20:49:16.034443 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bb76fc874-xq9l8" event={"ID":"07e8467c-2f07-49d5-8c20-a33c8f9d4291","Type":"ContainerStarted","Data":"99887b06177296d7faa253d2c424976adba72f9263aa6cbec5a4e484feb64301"} Mar 13 20:49:16 crc kubenswrapper[5029]: I0313 20:49:16.035232 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:16 crc kubenswrapper[5029]: I0313 20:49:16.055465 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xhhzb" 
podStartSLOduration=4.580101211 podStartE2EDuration="52.055448136s" podCreationTimestamp="2026-03-13 20:48:24 +0000 UTC" firstStartedPulling="2026-03-13 20:48:26.784932149 +0000 UTC m=+1266.801014562" lastFinishedPulling="2026-03-13 20:49:14.260279084 +0000 UTC m=+1314.276361487" observedRunningTime="2026-03-13 20:49:16.04938789 +0000 UTC m=+1316.065470303" watchObservedRunningTime="2026-03-13 20:49:16.055448136 +0000 UTC m=+1316.071530539" Mar 13 20:49:16 crc kubenswrapper[5029]: I0313 20:49:16.082697 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7bb76fc874-xq9l8" podStartSLOduration=3.082674939 podStartE2EDuration="3.082674939s" podCreationTimestamp="2026-03-13 20:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:16.068338498 +0000 UTC m=+1316.084420901" watchObservedRunningTime="2026-03-13 20:49:16.082674939 +0000 UTC m=+1316.098757352" Mar 13 20:49:16 crc kubenswrapper[5029]: I0313 20:49:16.618795 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb18b58b-6d93-4ca4-b191-161234269f8b" path="/var/lib/kubelet/pods/eb18b58b-6d93-4ca4-b191-161234269f8b/volumes" Mar 13 20:49:17 crc kubenswrapper[5029]: I0313 20:49:17.045470 5029 generic.go:334] "Generic (PLEG): container finished" podID="a5243e50-28ff-4f5c-aeb1-97a87b1f2765" containerID="24df77b155f847d4bcfc3c6cde67d1e0e0eeea2cfb3bba52cef98d37b9c28f3a" exitCode=0 Mar 13 20:49:17 crc kubenswrapper[5029]: I0313 20:49:17.045548 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5hp5" event={"ID":"a5243e50-28ff-4f5c-aeb1-97a87b1f2765","Type":"ContainerDied","Data":"24df77b155f847d4bcfc3c6cde67d1e0e0eeea2cfb3bba52cef98d37b9c28f3a"} Mar 13 20:49:18 crc kubenswrapper[5029]: I0313 20:49:18.584597 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Mar 13 20:49:18 crc kubenswrapper[5029]: I0313 20:49:18.585114 5029 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 20:49:18 crc kubenswrapper[5029]: I0313 20:49:18.765030 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.682689 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.757800 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-db-sync-config-data\") pod \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.757932 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd5cj\" (UniqueName: \"kubernetes.io/projected/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-kube-api-access-gd5cj\") pod \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.758138 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-combined-ca-bundle\") pod \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\" (UID: \"a5243e50-28ff-4f5c-aeb1-97a87b1f2765\") " Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.766166 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a5243e50-28ff-4f5c-aeb1-97a87b1f2765" (UID: "a5243e50-28ff-4f5c-aeb1-97a87b1f2765"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.766315 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-kube-api-access-gd5cj" (OuterVolumeSpecName: "kube-api-access-gd5cj") pod "a5243e50-28ff-4f5c-aeb1-97a87b1f2765" (UID: "a5243e50-28ff-4f5c-aeb1-97a87b1f2765"). InnerVolumeSpecName "kube-api-access-gd5cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.795311 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5243e50-28ff-4f5c-aeb1-97a87b1f2765" (UID: "a5243e50-28ff-4f5c-aeb1-97a87b1f2765"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.861709 5029 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.861749 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd5cj\" (UniqueName: \"kubernetes.io/projected/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-kube-api-access-gd5cj\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:22 crc kubenswrapper[5029]: I0313 20:49:22.861762 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5243e50-28ff-4f5c-aeb1-97a87b1f2765-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:23 crc kubenswrapper[5029]: I0313 20:49:23.119646 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5hp5" 
event={"ID":"a5243e50-28ff-4f5c-aeb1-97a87b1f2765","Type":"ContainerDied","Data":"ba74b0044d5e1eb920e205fed6eaa760a4a5d87f7967ca56b5035ea19baa1c5a"} Mar 13 20:49:23 crc kubenswrapper[5029]: I0313 20:49:23.119695 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba74b0044d5e1eb920e205fed6eaa760a4a5d87f7967ca56b5035ea19baa1c5a" Mar 13 20:49:23 crc kubenswrapper[5029]: I0313 20:49:23.119757 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5hp5" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.051383 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5bd9d96d9f-z6km7"] Mar 13 20:49:24 crc kubenswrapper[5029]: E0313 20:49:24.052240 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5243e50-28ff-4f5c-aeb1-97a87b1f2765" containerName="barbican-db-sync" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.052260 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5243e50-28ff-4f5c-aeb1-97a87b1f2765" containerName="barbican-db-sync" Mar 13 20:49:24 crc kubenswrapper[5029]: E0313 20:49:24.052291 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb18b58b-6d93-4ca4-b191-161234269f8b" containerName="dnsmasq-dns" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.052300 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb18b58b-6d93-4ca4-b191-161234269f8b" containerName="dnsmasq-dns" Mar 13 20:49:24 crc kubenswrapper[5029]: E0313 20:49:24.052319 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb18b58b-6d93-4ca4-b191-161234269f8b" containerName="init" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.052328 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb18b58b-6d93-4ca4-b191-161234269f8b" containerName="init" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.052538 5029 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="eb18b58b-6d93-4ca4-b191-161234269f8b" containerName="dnsmasq-dns" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.052581 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5243e50-28ff-4f5c-aeb1-97a87b1f2765" containerName="barbican-db-sync" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.062016 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bd9d96d9f-z6km7"] Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.062157 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.064722 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-c6vdb" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.068525 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.069664 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65fd679d74-klxb9"] Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.071831 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.075061 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.078045 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.112892 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65fd679d74-klxb9"] Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.156570 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-242b6"] Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.158096 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.203885 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-242b6"] Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.205186 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7975f817-5324-4ea2-9f48-7d83b39c2fab-combined-ca-bundle\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.205240 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2251506-8d45-43fb-b88b-3fc76a486e60-logs\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc 
kubenswrapper[5029]: I0313 20:49:24.205285 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7975f817-5324-4ea2-9f48-7d83b39c2fab-config-data-custom\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.205310 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7975f817-5324-4ea2-9f48-7d83b39c2fab-logs\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.205366 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2251506-8d45-43fb-b88b-3fc76a486e60-config-data-custom\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.205404 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td9w8\" (UniqueName: \"kubernetes.io/projected/f2251506-8d45-43fb-b88b-3fc76a486e60-kube-api-access-td9w8\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.205433 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmn42\" (UniqueName: \"kubernetes.io/projected/7975f817-5324-4ea2-9f48-7d83b39c2fab-kube-api-access-xmn42\") pod 
\"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.205467 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7975f817-5324-4ea2-9f48-7d83b39c2fab-config-data\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.205598 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2251506-8d45-43fb-b88b-3fc76a486e60-combined-ca-bundle\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.205632 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2251506-8d45-43fb-b88b-3fc76a486e60-config-data\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.287149 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-567cdd7cd-vrz7b"] Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.290006 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.299001 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-567cdd7cd-vrz7b"] Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.314744 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljbx\" (UniqueName: \"kubernetes.io/projected/af00eab7-5ce5-4058-8328-631a7103290c-kube-api-access-fljbx\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.314802 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.314902 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2251506-8d45-43fb-b88b-3fc76a486e60-combined-ca-bundle\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.314934 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-svc\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.314969 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2251506-8d45-43fb-b88b-3fc76a486e60-config-data\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315123 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315184 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7975f817-5324-4ea2-9f48-7d83b39c2fab-combined-ca-bundle\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315221 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2251506-8d45-43fb-b88b-3fc76a486e60-logs\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315258 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7975f817-5324-4ea2-9f48-7d83b39c2fab-config-data-custom\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315285 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7975f817-5324-4ea2-9f48-7d83b39c2fab-logs\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315316 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-config\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315390 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2251506-8d45-43fb-b88b-3fc76a486e60-config-data-custom\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315421 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td9w8\" (UniqueName: \"kubernetes.io/projected/f2251506-8d45-43fb-b88b-3fc76a486e60-kube-api-access-td9w8\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315445 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmn42\" (UniqueName: \"kubernetes.io/projected/7975f817-5324-4ea2-9f48-7d83b39c2fab-kube-api-access-xmn42\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 
20:49:24.315474 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7975f817-5324-4ea2-9f48-7d83b39c2fab-config-data\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.315512 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.319343 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2251506-8d45-43fb-b88b-3fc76a486e60-logs\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.319698 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7975f817-5324-4ea2-9f48-7d83b39c2fab-logs\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.320724 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2251506-8d45-43fb-b88b-3fc76a486e60-config-data-custom\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.320973 
5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.321343 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7975f817-5324-4ea2-9f48-7d83b39c2fab-config-data\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.324954 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2251506-8d45-43fb-b88b-3fc76a486e60-config-data\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.326202 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7975f817-5324-4ea2-9f48-7d83b39c2fab-config-data-custom\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.341742 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7975f817-5324-4ea2-9f48-7d83b39c2fab-combined-ca-bundle\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.343422 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2251506-8d45-43fb-b88b-3fc76a486e60-combined-ca-bundle\") pod 
\"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.344907 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmn42\" (UniqueName: \"kubernetes.io/projected/7975f817-5324-4ea2-9f48-7d83b39c2fab-kube-api-access-xmn42\") pod \"barbican-keystone-listener-65fd679d74-klxb9\" (UID: \"7975f817-5324-4ea2-9f48-7d83b39c2fab\") " pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.346635 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td9w8\" (UniqueName: \"kubernetes.io/projected/f2251506-8d45-43fb-b88b-3fc76a486e60-kube-api-access-td9w8\") pod \"barbican-worker-5bd9d96d9f-z6km7\" (UID: \"f2251506-8d45-43fb-b88b-3fc76a486e60\") " pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.416634 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5bd9d96d9f-z6km7" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.417531 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.417590 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-combined-ca-bundle\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.417659 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fljbx\" (UniqueName: \"kubernetes.io/projected/af00eab7-5ce5-4058-8328-631a7103290c-kube-api-access-fljbx\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.417682 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm7db\" (UniqueName: \"kubernetes.io/projected/0827f1c5-a1b0-435f-9649-695e40413d18-kube-api-access-zm7db\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.417738 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-nb\") pod 
\"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.417791 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-svc\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.417940 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.417989 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data-custom\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.418058 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-config\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.418113 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0827f1c5-a1b0-435f-9649-695e40413d18-logs\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: 
\"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.418152 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.419379 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.419468 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.419509 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.420017 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-svc\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " 
pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.420159 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-config\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.438534 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fljbx\" (UniqueName: \"kubernetes.io/projected/af00eab7-5ce5-4058-8328-631a7103290c-kube-api-access-fljbx\") pod \"dnsmasq-dns-85ff748b95-242b6\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.453058 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.515154 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.519917 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data-custom\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.519970 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0827f1c5-a1b0-435f-9649-695e40413d18-logs\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.520003 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.520033 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-combined-ca-bundle\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.520068 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm7db\" (UniqueName: \"kubernetes.io/projected/0827f1c5-a1b0-435f-9649-695e40413d18-kube-api-access-zm7db\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " 
pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.520806 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0827f1c5-a1b0-435f-9649-695e40413d18-logs\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.524322 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-combined-ca-bundle\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.524736 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data-custom\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.530117 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.539509 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm7db\" (UniqueName: \"kubernetes.io/projected/0827f1c5-a1b0-435f-9649-695e40413d18-kube-api-access-zm7db\") pod \"barbican-api-567cdd7cd-vrz7b\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 
20:49:24.565738 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6c6bfdcb-59kpl" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.617274 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:24 crc kubenswrapper[5029]: I0313 20:49:24.825671 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-674bcdb76-8wx84" podUID="e88c424e-0503-40ac-9f24-5daa55912ff3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Mar 13 20:49:25 crc kubenswrapper[5029]: E0313 20:49:25.177710 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" Mar 13 20:49:25 crc kubenswrapper[5029]: I0313 20:49:25.414892 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bd9d96d9f-z6km7"] Mar 13 20:49:25 crc kubenswrapper[5029]: W0313 20:49:25.425077 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf00eab7_5ce5_4058_8328_631a7103290c.slice/crio-0775a3b871994edc4bf0e65d3a54f96ec4f09076bcc5666cbe455214ab7595b3 WatchSource:0}: Error finding container 0775a3b871994edc4bf0e65d3a54f96ec4f09076bcc5666cbe455214ab7595b3: Status 404 returned error can't find the container with id 0775a3b871994edc4bf0e65d3a54f96ec4f09076bcc5666cbe455214ab7595b3 Mar 13 20:49:25 crc 
kubenswrapper[5029]: W0313 20:49:25.425869 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2251506_8d45_43fb_b88b_3fc76a486e60.slice/crio-8df7ee065d527d10866a75b598bbe681ab7fca35d5cb27c8d74649c8dc6cc0d3 WatchSource:0}: Error finding container 8df7ee065d527d10866a75b598bbe681ab7fca35d5cb27c8d74649c8dc6cc0d3: Status 404 returned error can't find the container with id 8df7ee065d527d10866a75b598bbe681ab7fca35d5cb27c8d74649c8dc6cc0d3 Mar 13 20:49:25 crc kubenswrapper[5029]: I0313 20:49:25.432209 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-242b6"] Mar 13 20:49:25 crc kubenswrapper[5029]: I0313 20:49:25.582031 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65fd679d74-klxb9"] Mar 13 20:49:25 crc kubenswrapper[5029]: I0313 20:49:25.598410 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-567cdd7cd-vrz7b"] Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.157557 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bd9d96d9f-z6km7" event={"ID":"f2251506-8d45-43fb-b88b-3fc76a486e60","Type":"ContainerStarted","Data":"8df7ee065d527d10866a75b598bbe681ab7fca35d5cb27c8d74649c8dc6cc0d3"} Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.158995 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" event={"ID":"7975f817-5324-4ea2-9f48-7d83b39c2fab","Type":"ContainerStarted","Data":"aa488826dbcb9730d578131fb5ca46aeb2e2cb98614f90b12a8757ba35e7ddf6"} Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.161707 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-567cdd7cd-vrz7b" event={"ID":"0827f1c5-a1b0-435f-9649-695e40413d18","Type":"ContainerStarted","Data":"30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb"} Mar 13 
20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.161744 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-567cdd7cd-vrz7b" event={"ID":"0827f1c5-a1b0-435f-9649-695e40413d18","Type":"ContainerStarted","Data":"59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013"} Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.161755 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-567cdd7cd-vrz7b" event={"ID":"0827f1c5-a1b0-435f-9649-695e40413d18","Type":"ContainerStarted","Data":"d95d356083a93b07d6d3d5d6965480332a44c6baddd19966ff3e764df2009879"} Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.161879 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.164103 5029 generic.go:334] "Generic (PLEG): container finished" podID="af00eab7-5ce5-4058-8328-631a7103290c" containerID="53501a280e8a464e8f1815d471da1f16e010c6999a7685e0fb1eaade94e82a70" exitCode=0 Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.164183 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-242b6" event={"ID":"af00eab7-5ce5-4058-8328-631a7103290c","Type":"ContainerDied","Data":"53501a280e8a464e8f1815d471da1f16e010c6999a7685e0fb1eaade94e82a70"} Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.164227 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-242b6" event={"ID":"af00eab7-5ce5-4058-8328-631a7103290c","Type":"ContainerStarted","Data":"0775a3b871994edc4bf0e65d3a54f96ec4f09076bcc5666cbe455214ab7595b3"} Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.171442 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"160773c1-ebe6-4b3b-b26d-5745cbf9ef70","Type":"ContainerStarted","Data":"1918a216e74582b1cc35a8e106d5c3de78a51b7b18a5073a244da3af3cd5a518"} Mar 13 20:49:26 
crc kubenswrapper[5029]: I0313 20:49:26.171618 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="ceilometer-notification-agent" containerID="cri-o://ce4d5f28881921a2b6665cdede7f8a30d9b32d41ef9ab8c3708eadcc24b06e0c" gracePeriod=30 Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.171815 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.171836 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="proxy-httpd" containerID="cri-o://1918a216e74582b1cc35a8e106d5c3de78a51b7b18a5073a244da3af3cd5a518" gracePeriod=30 Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.171880 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="sg-core" containerID="cri-o://509a4f50aba58146b47f7a6520124efbe21784b4cda8e178e183d3168e9b3af9" gracePeriod=30 Mar 13 20:49:26 crc kubenswrapper[5029]: I0313 20:49:26.191955 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-567cdd7cd-vrz7b" podStartSLOduration=2.191926154 podStartE2EDuration="2.191926154s" podCreationTimestamp="2026-03-13 20:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:26.179541286 +0000 UTC m=+1326.195623689" watchObservedRunningTime="2026-03-13 20:49:26.191926154 +0000 UTC m=+1326.208008557" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.186482 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-242b6" 
event={"ID":"af00eab7-5ce5-4058-8328-631a7103290c","Type":"ContainerStarted","Data":"3a02b91389bdfa8cb01c5619ffbaba17d3e9b2a3bcd1946cbf442e711f276ecc"} Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.186896 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.191028 5029 generic.go:334] "Generic (PLEG): container finished" podID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerID="1918a216e74582b1cc35a8e106d5c3de78a51b7b18a5073a244da3af3cd5a518" exitCode=0 Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.191062 5029 generic.go:334] "Generic (PLEG): container finished" podID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerID="509a4f50aba58146b47f7a6520124efbe21784b4cda8e178e183d3168e9b3af9" exitCode=2 Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.191095 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"160773c1-ebe6-4b3b-b26d-5745cbf9ef70","Type":"ContainerDied","Data":"1918a216e74582b1cc35a8e106d5c3de78a51b7b18a5073a244da3af3cd5a518"} Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.191133 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"160773c1-ebe6-4b3b-b26d-5745cbf9ef70","Type":"ContainerDied","Data":"509a4f50aba58146b47f7a6520124efbe21784b4cda8e178e183d3168e9b3af9"} Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.191309 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.208299 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-242b6" podStartSLOduration=3.208283185 podStartE2EDuration="3.208283185s" podCreationTimestamp="2026-03-13 20:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:27.203991779 +0000 UTC m=+1327.220074172" watchObservedRunningTime="2026-03-13 20:49:27.208283185 +0000 UTC m=+1327.224365588" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.468394 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-57db7d86f6-rjplz"] Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.471106 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.473384 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.474670 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.487277 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57db7d86f6-rjplz"] Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.602090 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5l2\" (UniqueName: \"kubernetes.io/projected/441f7f6f-8c00-4ae7-a970-b199a5d94c55-kube-api-access-bt5l2\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.602235 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-config-data-custom\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.602292 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-combined-ca-bundle\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.602316 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-internal-tls-certs\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.602457 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/441f7f6f-8c00-4ae7-a970-b199a5d94c55-logs\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.602479 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-public-tls-certs\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.602508 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-config-data\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 
20:49:27.705115 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-combined-ca-bundle\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.705255 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-internal-tls-certs\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.705563 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/441f7f6f-8c00-4ae7-a970-b199a5d94c55-logs\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.705590 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-public-tls-certs\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.705645 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-config-data\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.705725 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bt5l2\" (UniqueName: \"kubernetes.io/projected/441f7f6f-8c00-4ae7-a970-b199a5d94c55-kube-api-access-bt5l2\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.706556 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-config-data-custom\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.706755 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/441f7f6f-8c00-4ae7-a970-b199a5d94c55-logs\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.712249 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-internal-tls-certs\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.712425 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-config-data-custom\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.720222 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-combined-ca-bundle\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.720474 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-public-tls-certs\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.724273 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441f7f6f-8c00-4ae7-a970-b199a5d94c55-config-data\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.728097 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5l2\" (UniqueName: \"kubernetes.io/projected/441f7f6f-8c00-4ae7-a970-b199a5d94c55-kube-api-access-bt5l2\") pod \"barbican-api-57db7d86f6-rjplz\" (UID: \"441f7f6f-8c00-4ae7-a970-b199a5d94c55\") " pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:27 crc kubenswrapper[5029]: I0313 20:49:27.798741 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57db7d86f6-rjplz" Mar 13 20:49:28 crc kubenswrapper[5029]: I0313 20:49:28.207040 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" event={"ID":"7975f817-5324-4ea2-9f48-7d83b39c2fab","Type":"ContainerStarted","Data":"2edaf5b9089e59b6d44d1fd77a3f9fcc1474c8cc993543806e720980ff1462fc"} Mar 13 20:49:28 crc kubenswrapper[5029]: I0313 20:49:28.210184 5029 generic.go:334] "Generic (PLEG): container finished" podID="e5a13c03-b012-4416-bb5b-3ff21417290a" containerID="bb6621cafaff21691e905a7a332368bcd78b04c74bea53ae656f96b343a4c154" exitCode=0 Mar 13 20:49:28 crc kubenswrapper[5029]: I0313 20:49:28.210309 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xhhzb" event={"ID":"e5a13c03-b012-4416-bb5b-3ff21417290a","Type":"ContainerDied","Data":"bb6621cafaff21691e905a7a332368bcd78b04c74bea53ae656f96b343a4c154"} Mar 13 20:49:28 crc kubenswrapper[5029]: I0313 20:49:28.215439 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bd9d96d9f-z6km7" event={"ID":"f2251506-8d45-43fb-b88b-3fc76a486e60","Type":"ContainerStarted","Data":"532baf90e02dc4d5b9aa9702574728f30aebe9086eeaaaa72a6b46d0a44a92b5"} Mar 13 20:49:28 crc kubenswrapper[5029]: I0313 20:49:28.403667 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57db7d86f6-rjplz"] Mar 13 20:49:28 crc kubenswrapper[5029]: W0313 20:49:28.412764 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod441f7f6f_8c00_4ae7_a970_b199a5d94c55.slice/crio-44ef7e60ddb5a108a784a824160fe21c421fccacafb73d48ef825d05e991853b WatchSource:0}: Error finding container 44ef7e60ddb5a108a784a824160fe21c421fccacafb73d48ef825d05e991853b: Status 404 returned error can't find the container with id 44ef7e60ddb5a108a784a824160fe21c421fccacafb73d48ef825d05e991853b Mar 13 
20:49:28 crc kubenswrapper[5029]: I0313 20:49:28.942101 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7cf8f459d4-bj2jk" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.251910 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57db7d86f6-rjplz" event={"ID":"441f7f6f-8c00-4ae7-a970-b199a5d94c55","Type":"ContainerStarted","Data":"31388d4b577ba99adb5a7942a2d3c659d783cb89da43a39a20f1fc34f411eab6"} Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.252480 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57db7d86f6-rjplz" event={"ID":"441f7f6f-8c00-4ae7-a970-b199a5d94c55","Type":"ContainerStarted","Data":"4ef3b247859b360c9dab5064c864afc14d9748d706252c09c02009b93770b94b"} Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.252494 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57db7d86f6-rjplz" event={"ID":"441f7f6f-8c00-4ae7-a970-b199a5d94c55","Type":"ContainerStarted","Data":"44ef7e60ddb5a108a784a824160fe21c421fccacafb73d48ef825d05e991853b"} Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.264131 5029 generic.go:334] "Generic (PLEG): container finished" podID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerID="ce4d5f28881921a2b6665cdede7f8a30d9b32d41ef9ab8c3708eadcc24b06e0c" exitCode=0 Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.264220 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"160773c1-ebe6-4b3b-b26d-5745cbf9ef70","Type":"ContainerDied","Data":"ce4d5f28881921a2b6665cdede7f8a30d9b32d41ef9ab8c3708eadcc24b06e0c"} Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.264269 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"160773c1-ebe6-4b3b-b26d-5745cbf9ef70","Type":"ContainerDied","Data":"026e69edd8af455c0839b69cce479b743095fd7f732c9854a6623864a6528622"} Mar 13 20:49:29 crc 
kubenswrapper[5029]: I0313 20:49:29.264285 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="026e69edd8af455c0839b69cce479b743095fd7f732c9854a6623864a6528622" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.271823 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bd9d96d9f-z6km7" event={"ID":"f2251506-8d45-43fb-b88b-3fc76a486e60","Type":"ContainerStarted","Data":"dc10cd93973f96637e836008bdeac1aaf1c4b9250d2f31a6d70076576b7189d7"} Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.283686 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" event={"ID":"7975f817-5324-4ea2-9f48-7d83b39c2fab","Type":"ContainerStarted","Data":"d1c92465ce7459d4aa957c9168169288ff1573ec0180b9d78ef03c115d184dc4"} Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.306861 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85f64689c7-r5skz"] Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.307116 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85f64689c7-r5skz" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-api" containerID="cri-o://0085d1eb1604fb4548e9e43461f8bc7bf9ee54b12f800dfa9ff3c16b36293694" gracePeriod=30 Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.307588 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85f64689c7-r5skz" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-httpd" containerID="cri-o://9fe692e61daa725a8bab8b13b6b6cd4a995542200a130715e3a5c4e3968886eb" gracePeriod=30 Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.323810 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5bd9d96d9f-z6km7" podStartSLOduration=2.968897144 podStartE2EDuration="5.323763848s" 
podCreationTimestamp="2026-03-13 20:49:24 +0000 UTC" firstStartedPulling="2026-03-13 20:49:25.429269834 +0000 UTC m=+1325.445352237" lastFinishedPulling="2026-03-13 20:49:27.784136538 +0000 UTC m=+1327.800218941" observedRunningTime="2026-03-13 20:49:29.320960561 +0000 UTC m=+1329.337042984" watchObservedRunningTime="2026-03-13 20:49:29.323763848 +0000 UTC m=+1329.339846251" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.363737 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.378945 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d875c8b5-6tdfp"] Mar 13 20:49:29 crc kubenswrapper[5029]: E0313 20:49:29.379411 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="sg-core" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.379427 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="sg-core" Mar 13 20:49:29 crc kubenswrapper[5029]: E0313 20:49:29.379448 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="proxy-httpd" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.379455 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="proxy-httpd" Mar 13 20:49:29 crc kubenswrapper[5029]: E0313 20:49:29.379465 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="ceilometer-notification-agent" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.379471 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="ceilometer-notification-agent" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.379722 5029 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="ceilometer-notification-agent" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.379736 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="sg-core" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.379751 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" containerName="proxy-httpd" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.380728 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.404435 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d875c8b5-6tdfp"] Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.435524 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65fd679d74-klxb9" podStartSLOduration=3.236094764 podStartE2EDuration="5.435498686s" podCreationTimestamp="2026-03-13 20:49:24 +0000 UTC" firstStartedPulling="2026-03-13 20:49:25.582639139 +0000 UTC m=+1325.598721542" lastFinishedPulling="2026-03-13 20:49:27.782043061 +0000 UTC m=+1327.798125464" observedRunningTime="2026-03-13 20:49:29.356687575 +0000 UTC m=+1329.372769988" watchObservedRunningTime="2026-03-13 20:49:29.435498686 +0000 UTC m=+1329.451581089" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.562582 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-scripts\") pod \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.562674 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-sg-core-conf-yaml\") pod \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.562763 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-log-httpd\") pod \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.562897 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-combined-ca-bundle\") pod \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.562915 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-run-httpd\") pod \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.562944 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-config-data\") pod \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.562963 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpbb2\" (UniqueName: \"kubernetes.io/projected/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-kube-api-access-bpbb2\") pod \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\" (UID: \"160773c1-ebe6-4b3b-b26d-5745cbf9ef70\") " Mar 13 20:49:29 crc 
kubenswrapper[5029]: I0313 20:49:29.563226 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-ovndb-tls-certs\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.563250 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-httpd-config\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.563296 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-public-tls-certs\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.563325 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-combined-ca-bundle\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.563380 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-internal-tls-certs\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc 
kubenswrapper[5029]: I0313 20:49:29.563425 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9xr\" (UniqueName: \"kubernetes.io/projected/2049789d-643f-478a-8c68-c0ab07e8a3a3-kube-api-access-cf9xr\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.563445 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-config\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.567706 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "160773c1-ebe6-4b3b-b26d-5745cbf9ef70" (UID: "160773c1-ebe6-4b3b-b26d-5745cbf9ef70"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.568018 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "160773c1-ebe6-4b3b-b26d-5745cbf9ef70" (UID: "160773c1-ebe6-4b3b-b26d-5745cbf9ef70"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.577318 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-scripts" (OuterVolumeSpecName: "scripts") pod "160773c1-ebe6-4b3b-b26d-5745cbf9ef70" (UID: "160773c1-ebe6-4b3b-b26d-5745cbf9ef70"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.593792 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-kube-api-access-bpbb2" (OuterVolumeSpecName: "kube-api-access-bpbb2") pod "160773c1-ebe6-4b3b-b26d-5745cbf9ef70" (UID: "160773c1-ebe6-4b3b-b26d-5745cbf9ef70"). InnerVolumeSpecName "kube-api-access-bpbb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.606167 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "160773c1-ebe6-4b3b-b26d-5745cbf9ef70" (UID: "160773c1-ebe6-4b3b-b26d-5745cbf9ef70"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.665598 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9xr\" (UniqueName: \"kubernetes.io/projected/2049789d-643f-478a-8c68-c0ab07e8a3a3-kube-api-access-cf9xr\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666216 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-config\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666283 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-ovndb-tls-certs\") pod 
\"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666306 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-httpd-config\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666365 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-public-tls-certs\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666420 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-combined-ca-bundle\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666484 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-internal-tls-certs\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666539 5029 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666555 5029 reconciler_common.go:293] "Volume 
detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666567 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpbb2\" (UniqueName: \"kubernetes.io/projected/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-kube-api-access-bpbb2\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666578 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.666589 5029 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.683031 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-85f64689c7-r5skz" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.162:9696/\": read tcp 10.217.0.2:52994->10.217.0.162:9696: read: connection reset by peer" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.690698 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-internal-tls-certs\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.696751 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-httpd-config\") pod \"neutron-6d875c8b5-6tdfp\" (UID: 
\"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.697456 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-ovndb-tls-certs\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.702032 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9xr\" (UniqueName: \"kubernetes.io/projected/2049789d-643f-478a-8c68-c0ab07e8a3a3-kube-api-access-cf9xr\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.707293 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-public-tls-certs\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.707641 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-config\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.712429 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2049789d-643f-478a-8c68-c0ab07e8a3a3-combined-ca-bundle\") pod \"neutron-6d875c8b5-6tdfp\" (UID: \"2049789d-643f-478a-8c68-c0ab07e8a3a3\") " pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 
20:49:29.722681 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "160773c1-ebe6-4b3b-b26d-5745cbf9ef70" (UID: "160773c1-ebe6-4b3b-b26d-5745cbf9ef70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.732153 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.741023 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-config-data" (OuterVolumeSpecName: "config-data") pod "160773c1-ebe6-4b3b-b26d-5745cbf9ef70" (UID: "160773c1-ebe6-4b3b-b26d-5745cbf9ef70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.768300 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.768353 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160773c1-ebe6-4b3b-b26d-5745cbf9ef70-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.908411 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:49:29 crc kubenswrapper[5029]: I0313 20:49:29.994129 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-scripts\") pod \"e5a13c03-b012-4416-bb5b-3ff21417290a\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.009144 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-scripts" (OuterVolumeSpecName: "scripts") pod "e5a13c03-b012-4416-bb5b-3ff21417290a" (UID: "e5a13c03-b012-4416-bb5b-3ff21417290a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.097793 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a13c03-b012-4416-bb5b-3ff21417290a-etc-machine-id\") pod \"e5a13c03-b012-4416-bb5b-3ff21417290a\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.097992 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-db-sync-config-data\") pod \"e5a13c03-b012-4416-bb5b-3ff21417290a\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.097980 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5a13c03-b012-4416-bb5b-3ff21417290a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5a13c03-b012-4416-bb5b-3ff21417290a" (UID: "e5a13c03-b012-4416-bb5b-3ff21417290a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.098052 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-combined-ca-bundle\") pod \"e5a13c03-b012-4416-bb5b-3ff21417290a\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.098135 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9plv\" (UniqueName: \"kubernetes.io/projected/e5a13c03-b012-4416-bb5b-3ff21417290a-kube-api-access-x9plv\") pod \"e5a13c03-b012-4416-bb5b-3ff21417290a\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.098229 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-config-data\") pod \"e5a13c03-b012-4416-bb5b-3ff21417290a\" (UID: \"e5a13c03-b012-4416-bb5b-3ff21417290a\") " Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.098965 5029 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a13c03-b012-4416-bb5b-3ff21417290a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.098989 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.102806 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e5a13c03-b012-4416-bb5b-3ff21417290a" (UID: 
"e5a13c03-b012-4416-bb5b-3ff21417290a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.111115 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a13c03-b012-4416-bb5b-3ff21417290a-kube-api-access-x9plv" (OuterVolumeSpecName: "kube-api-access-x9plv") pod "e5a13c03-b012-4416-bb5b-3ff21417290a" (UID: "e5a13c03-b012-4416-bb5b-3ff21417290a"). InnerVolumeSpecName "kube-api-access-x9plv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.150009 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5a13c03-b012-4416-bb5b-3ff21417290a" (UID: "e5a13c03-b012-4416-bb5b-3ff21417290a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.177003 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-config-data" (OuterVolumeSpecName: "config-data") pod "e5a13c03-b012-4416-bb5b-3ff21417290a" (UID: "e5a13c03-b012-4416-bb5b-3ff21417290a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.201255 5029 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.201296 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.201306 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9plv\" (UniqueName: \"kubernetes.io/projected/e5a13c03-b012-4416-bb5b-3ff21417290a-kube-api-access-x9plv\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.201318 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a13c03-b012-4416-bb5b-3ff21417290a-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.293974 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xhhzb" event={"ID":"e5a13c03-b012-4416-bb5b-3ff21417290a","Type":"ContainerDied","Data":"17c8280991d15e962bd2f31bc6c3678400794de8ec3a7bd2bd7786be512920e7"} Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.294007 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xhhzb" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.294021 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17c8280991d15e962bd2f31bc6c3678400794de8ec3a7bd2bd7786be512920e7" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.296576 5029 generic.go:334] "Generic (PLEG): container finished" podID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerID="9fe692e61daa725a8bab8b13b6b6cd4a995542200a130715e3a5c4e3968886eb" exitCode=0 Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.296623 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85f64689c7-r5skz" event={"ID":"bb83b759-9e8e-4e99-8193-f8dbf847f440","Type":"ContainerDied","Data":"9fe692e61daa725a8bab8b13b6b6cd4a995542200a130715e3a5c4e3968886eb"} Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.296824 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.326495 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57db7d86f6-rjplz" podStartSLOduration=3.326476626 podStartE2EDuration="3.326476626s" podCreationTimestamp="2026-03-13 20:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:30.321657095 +0000 UTC m=+1330.337739498" watchObservedRunningTime="2026-03-13 20:49:30.326476626 +0000 UTC m=+1330.342559029" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.416610 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.439921 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.457040 5029 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: E0313 20:49:30.462366 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a13c03-b012-4416-bb5b-3ff21417290a" containerName="cinder-db-sync" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.462388 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a13c03-b012-4416-bb5b-3ff21417290a" containerName="cinder-db-sync" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.462559 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a13c03-b012-4416-bb5b-3ff21417290a" containerName="cinder-db-sync" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.464208 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.469397 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.469633 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.479798 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.652873 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-run-httpd\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.656279 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " 
pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.657660 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-scripts\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.657768 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-log-httpd\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.667508 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.667637 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-config-data\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.667683 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvdh\" (UniqueName: \"kubernetes.io/projected/d1917286-7b0a-46c8-a296-fab758373bc5-kube-api-access-4lvdh\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.681491 5029 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="160773c1-ebe6-4b3b-b26d-5745cbf9ef70" path="/var/lib/kubelet/pods/160773c1-ebe6-4b3b-b26d-5745cbf9ef70/volumes" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.690829 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d875c8b5-6tdfp"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.700699 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.703179 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.720260 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.720544 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vvsl5" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.720752 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.720930 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.741607 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.767222 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-242b6"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.767527 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-242b6" podUID="af00eab7-5ce5-4058-8328-631a7103290c" containerName="dnsmasq-dns" containerID="cri-o://3a02b91389bdfa8cb01c5619ffbaba17d3e9b2a3bcd1946cbf442e711f276ecc" gracePeriod=10 Mar 13 20:49:30 crc 
kubenswrapper[5029]: I0313 20:49:30.781903 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8422f550-8545-46cd-a310-a70b28a4f7cd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.781990 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-run-httpd\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782056 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782120 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-scripts\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782189 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-log-httpd\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782394 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782428 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782473 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tp5l\" (UniqueName: \"kubernetes.io/projected/8422f550-8545-46cd-a310-a70b28a4f7cd-kube-api-access-5tp5l\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782528 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782554 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-scripts\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782579 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782609 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-config-data\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.782642 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lvdh\" (UniqueName: \"kubernetes.io/projected/d1917286-7b0a-46c8-a296-fab758373bc5-kube-api-access-4lvdh\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.784465 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-run-httpd\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.794368 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-log-httpd\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.799056 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.801126 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.806003 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.818333 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.818666 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.819191 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-scripts\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.819568 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-config-data\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.840117 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.841945 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.844624 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.845076 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lvdh\" (UniqueName: \"kubernetes.io/projected/d1917286-7b0a-46c8-a296-fab758373bc5-kube-api-access-4lvdh\") pod \"ceilometer-0\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " pod="openstack/ceilometer-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.853651 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.894655 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.895438 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.895926 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.896746 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-ceph\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.896908 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.897243 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdnl5\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-kube-api-access-gdnl5\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.897535 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.897744 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.897881 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.898081 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.898210 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.898323 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.898422 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-scripts\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.898544 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.898672 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.898763 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.898869 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.898947 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-lib-modules\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.899014 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-run\") pod 
\"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.899075 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.899155 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-run\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.899292 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.899408 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tp5l\" (UniqueName: \"kubernetes.io/projected/8422f550-8545-46cd-a310-a70b28a4f7cd-kube-api-access-5tp5l\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.899513 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " 
pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.899629 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.899735 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvntl\" (UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-kube-api-access-tvntl\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.900224 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-scripts\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.900343 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.900426 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 
20:49:30.900515 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.900623 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.900695 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.900838 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.900945 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-nvme\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.901073 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8422f550-8545-46cd-a310-a70b28a4f7cd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.901165 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-dev\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.901259 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-sys\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.901337 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data-custom\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.902450 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8422f550-8545-46cd-a310-a70b28a4f7cd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.908550 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-combined-ca-bundle\") 
pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.912841 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.914563 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.917366 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-scripts\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.921284 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2st7k"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.923525 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.925467 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-85f64689c7-r5skz" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.162:9696/\": dial tcp 10.217.0.162:9696: connect: connection refused" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.932599 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tp5l\" (UniqueName: \"kubernetes.io/projected/8422f550-8545-46cd-a310-a70b28a4f7cd-kube-api-access-5tp5l\") pod \"cinder-scheduler-0\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.974907 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 13 20:49:30 crc kubenswrapper[5029]: I0313 20:49:30.987029 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2st7k"] Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.003350 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.003626 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.003749 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.003843 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-scripts\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.003958 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.004071 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9m88\" (UniqueName: \"kubernetes.io/projected/ba0d605d-3a66-4020-8ed6-8d069d055766-kube-api-access-c9m88\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.004185 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.004286 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-lib-modules\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.004379 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-run\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.004465 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.004539 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.004659 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-run\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.004752 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc 
kubenswrapper[5029]: I0313 20:49:31.004882 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.004987 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005097 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvntl\" (UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-kube-api-access-tvntl\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005217 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005281 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005370 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-run\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005404 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005433 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-lib-modules\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005455 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-run\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005585 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005768 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 
20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.005879 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.006017 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.006188 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.006317 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-nvme\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.006416 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.006524 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" 
(UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-dev\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.006626 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-sys\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.006733 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data-custom\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.006829 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.007002 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.007156 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " 
pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.007276 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.007388 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-ceph\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.007496 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.007624 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdnl5\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-kube-api-access-gdnl5\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.007722 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.007819 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.010765 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.010983 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.011113 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.011235 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-config\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.011571 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" 
(UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.014734 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.014838 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.015783 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.016034 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.016081 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-nvme\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc 
kubenswrapper[5029]: I0313 20:49:31.016128 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-dev\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.016153 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-sys\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.016658 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.019600 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.003961 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.024995 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-sys\") pod 
\"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.025132 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.025190 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.025561 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.025693 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-scripts\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.028570 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data-custom\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.029349 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-ceph\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.046384 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.047203 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.052211 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdnl5\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-kube-api-access-gdnl5\") pod \"cinder-backup-0\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.053135 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.053834 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " 
pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.054645 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.060355 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvntl\" (UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-kube-api-access-tvntl\") pod \"cinder-volume-volume1-0\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.070542 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.077963 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.086378 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.091051 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.101648 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.113671 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.113742 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-config\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.113802 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9m88\" (UniqueName: \"kubernetes.io/projected/ba0d605d-3a66-4020-8ed6-8d069d055766-kube-api-access-c9m88\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.117573 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.117692 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 
crc kubenswrapper[5029]: I0313 20:49:31.117766 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.119070 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.120168 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.122052 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-config\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.127493 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.138624 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.156284 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9m88\" (UniqueName: \"kubernetes.io/projected/ba0d605d-3a66-4020-8ed6-8d069d055766-kube-api-access-c9m88\") pod \"dnsmasq-dns-5c9776ccc5-2st7k\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.218924 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.227337 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-scripts\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.227381 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16129875-de71-41c7-8c75-17a279ded4b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.227415 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16129875-de71-41c7-8c75-17a279ded4b3-logs\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 
20:49:31.227509 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.227540 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.227635 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6smh8\" (UniqueName: \"kubernetes.io/projected/16129875-de71-41c7-8c75-17a279ded4b3-kube-api-access-6smh8\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.227651 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.258909 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.271464 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.293052 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.320140 5029 generic.go:334] "Generic (PLEG): container finished" podID="af00eab7-5ce5-4058-8328-631a7103290c" containerID="3a02b91389bdfa8cb01c5619ffbaba17d3e9b2a3bcd1946cbf442e711f276ecc" exitCode=0 Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.320601 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-242b6" event={"ID":"af00eab7-5ce5-4058-8328-631a7103290c","Type":"ContainerDied","Data":"3a02b91389bdfa8cb01c5619ffbaba17d3e9b2a3bcd1946cbf442e711f276ecc"} Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.322576 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d875c8b5-6tdfp" event={"ID":"2049789d-643f-478a-8c68-c0ab07e8a3a3","Type":"ContainerStarted","Data":"d1db746aa741848e2c262ceafe036891c74be159f335c8da16987abb7dafdc17"} Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.331142 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6smh8\" (UniqueName: \"kubernetes.io/projected/16129875-de71-41c7-8c75-17a279ded4b3-kube-api-access-6smh8\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.331196 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.331349 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-scripts\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " 
pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.331378 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16129875-de71-41c7-8c75-17a279ded4b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.331435 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16129875-de71-41c7-8c75-17a279ded4b3-logs\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.331599 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.331635 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.336737 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16129875-de71-41c7-8c75-17a279ded4b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.347006 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.347648 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.351493 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16129875-de71-41c7-8c75-17a279ded4b3-logs\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.363120 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.371598 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-scripts\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.376843 5029 generic.go:334] "Generic (PLEG): container finished" podID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerID="0085d1eb1604fb4548e9e43461f8bc7bf9ee54b12f800dfa9ff3c16b36293694" exitCode=0 Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.376928 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85f64689c7-r5skz" 
event={"ID":"bb83b759-9e8e-4e99-8193-f8dbf847f440","Type":"ContainerDied","Data":"0085d1eb1604fb4548e9e43461f8bc7bf9ee54b12f800dfa9ff3c16b36293694"} Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.377279 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6smh8\" (UniqueName: \"kubernetes.io/projected/16129875-de71-41c7-8c75-17a279ded4b3-kube-api-access-6smh8\") pod \"cinder-api-0\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.558483 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-242b6" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.616974 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.636195 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-nb\") pod \"af00eab7-5ce5-4058-8328-631a7103290c\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.636250 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fljbx\" (UniqueName: \"kubernetes.io/projected/af00eab7-5ce5-4058-8328-631a7103290c-kube-api-access-fljbx\") pod \"af00eab7-5ce5-4058-8328-631a7103290c\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.636340 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-svc\") pod \"af00eab7-5ce5-4058-8328-631a7103290c\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.636384 5029 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-config\") pod \"af00eab7-5ce5-4058-8328-631a7103290c\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.636464 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-swift-storage-0\") pod \"af00eab7-5ce5-4058-8328-631a7103290c\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.636491 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-sb\") pod \"af00eab7-5ce5-4058-8328-631a7103290c\" (UID: \"af00eab7-5ce5-4058-8328-631a7103290c\") " Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.666596 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af00eab7-5ce5-4058-8328-631a7103290c-kube-api-access-fljbx" (OuterVolumeSpecName: "kube-api-access-fljbx") pod "af00eab7-5ce5-4058-8328-631a7103290c" (UID: "af00eab7-5ce5-4058-8328-631a7103290c"). InnerVolumeSpecName "kube-api-access-fljbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.750363 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fljbx\" (UniqueName: \"kubernetes.io/projected/af00eab7-5ce5-4058-8328-631a7103290c-kube-api-access-fljbx\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.816331 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af00eab7-5ce5-4058-8328-631a7103290c" (UID: "af00eab7-5ce5-4058-8328-631a7103290c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.856175 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-config" (OuterVolumeSpecName: "config") pod "af00eab7-5ce5-4058-8328-631a7103290c" (UID: "af00eab7-5ce5-4058-8328-631a7103290c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.857196 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.857241 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.865373 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "af00eab7-5ce5-4058-8328-631a7103290c" (UID: "af00eab7-5ce5-4058-8328-631a7103290c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.901500 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af00eab7-5ce5-4058-8328-631a7103290c" (UID: "af00eab7-5ce5-4058-8328-631a7103290c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.910913 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af00eab7-5ce5-4058-8328-631a7103290c" (UID: "af00eab7-5ce5-4058-8328-631a7103290c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.960637 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.960678 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.960688 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af00eab7-5ce5-4058-8328-631a7103290c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.965205 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:49:31 crc kubenswrapper[5029]: I0313 20:49:31.965291 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.035446 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.051657 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85f64689c7-r5skz" Mar 13 20:49:32 crc kubenswrapper[5029]: W0313 20:49:32.052218 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1917286_7b0a_46c8_a296_fab758373bc5.slice/crio-39d62eb099cd48da730bbb44f19a389cc783684a5ef07b5d39efe8edddec8a30 WatchSource:0}: Error finding container 39d62eb099cd48da730bbb44f19a389cc783684a5ef07b5d39efe8edddec8a30: Status 404 returned error can't find the container with id 39d62eb099cd48da730bbb44f19a389cc783684a5ef07b5d39efe8edddec8a30 Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.164816 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-internal-tls-certs\") pod \"bb83b759-9e8e-4e99-8193-f8dbf847f440\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.165156 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvx47\" (UniqueName: \"kubernetes.io/projected/bb83b759-9e8e-4e99-8193-f8dbf847f440-kube-api-access-kvx47\") pod \"bb83b759-9e8e-4e99-8193-f8dbf847f440\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.165216 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-config\") pod \"bb83b759-9e8e-4e99-8193-f8dbf847f440\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.165266 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-combined-ca-bundle\") pod \"bb83b759-9e8e-4e99-8193-f8dbf847f440\" (UID: 
\"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.165295 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-httpd-config\") pod \"bb83b759-9e8e-4e99-8193-f8dbf847f440\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.165363 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-ovndb-tls-certs\") pod \"bb83b759-9e8e-4e99-8193-f8dbf847f440\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.165438 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-public-tls-certs\") pod \"bb83b759-9e8e-4e99-8193-f8dbf847f440\" (UID: \"bb83b759-9e8e-4e99-8193-f8dbf847f440\") " Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.175818 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb83b759-9e8e-4e99-8193-f8dbf847f440-kube-api-access-kvx47" (OuterVolumeSpecName: "kube-api-access-kvx47") pod "bb83b759-9e8e-4e99-8193-f8dbf847f440" (UID: "bb83b759-9e8e-4e99-8193-f8dbf847f440"). InnerVolumeSpecName "kube-api-access-kvx47". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.192318 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bb83b759-9e8e-4e99-8193-f8dbf847f440" (UID: "bb83b759-9e8e-4e99-8193-f8dbf847f440"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:32 crc kubenswrapper[5029]: W0313 20:49:32.259293 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8422f550_8545_46cd_a310_a70b28a4f7cd.slice/crio-97e92e4907de31c850bb05768163193f3e7ceed0ff2ec33fb1cd6a7fdb0dd301 WatchSource:0}: Error finding container 97e92e4907de31c850bb05768163193f3e7ceed0ff2ec33fb1cd6a7fdb0dd301: Status 404 returned error can't find the container with id 97e92e4907de31c850bb05768163193f3e7ceed0ff2ec33fb1cd6a7fdb0dd301 Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.268288 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvx47\" (UniqueName: \"kubernetes.io/projected/bb83b759-9e8e-4e99-8193-f8dbf847f440-kube-api-access-kvx47\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.268330 5029 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.284574 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.310074 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb83b759-9e8e-4e99-8193-f8dbf847f440" (UID: "bb83b759-9e8e-4e99-8193-f8dbf847f440"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.328069 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-config" (OuterVolumeSpecName: "config") pod "bb83b759-9e8e-4e99-8193-f8dbf847f440" (UID: "bb83b759-9e8e-4e99-8193-f8dbf847f440"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.342790 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2st7k"]
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.358114 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bb83b759-9e8e-4e99-8193-f8dbf847f440" (UID: "bb83b759-9e8e-4e99-8193-f8dbf847f440"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.362519 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb83b759-9e8e-4e99-8193-f8dbf847f440" (UID: "bb83b759-9e8e-4e99-8193-f8dbf847f440"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.372294 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-config\") on node \"crc\" DevicePath \"\""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.372342 5029 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.372357 5029 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.372367 5029 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.375064 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.379053 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb83b759-9e8e-4e99-8193-f8dbf847f440" (UID: "bb83b759-9e8e-4e99-8193-f8dbf847f440"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.427395 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-242b6" event={"ID":"af00eab7-5ce5-4058-8328-631a7103290c","Type":"ContainerDied","Data":"0775a3b871994edc4bf0e65d3a54f96ec4f09076bcc5666cbe455214ab7595b3"}
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.427458 5029 scope.go:117] "RemoveContainer" containerID="3a02b91389bdfa8cb01c5619ffbaba17d3e9b2a3bcd1946cbf442e711f276ecc"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.427733 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-242b6"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.446947 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d875c8b5-6tdfp" event={"ID":"2049789d-643f-478a-8c68-c0ab07e8a3a3","Type":"ContainerStarted","Data":"b60b67be26c0d24d94962f2bbbba28541b4592f7cc100ca12317e94ccd406095"}
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.447009 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d875c8b5-6tdfp" event={"ID":"2049789d-643f-478a-8c68-c0ab07e8a3a3","Type":"ContainerStarted","Data":"eae98112f9dc8d29c40ac552060465f55a97dac2bf741f326d34dfcdcd2a4ecb"}
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.447236 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d875c8b5-6tdfp"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.455561 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerStarted","Data":"39d62eb099cd48da730bbb44f19a389cc783684a5ef07b5d39efe8edddec8a30"}
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.475289 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b759-9e8e-4e99-8193-f8dbf847f440-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.487967 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d875c8b5-6tdfp" podStartSLOduration=3.487941333 podStartE2EDuration="3.487941333s" podCreationTimestamp="2026-03-13 20:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:32.482224727 +0000 UTC m=+1332.498307160" watchObservedRunningTime="2026-03-13 20:49:32.487941333 +0000 UTC m=+1332.504023736"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.502814 5029 scope.go:117] "RemoveContainer" containerID="53501a280e8a464e8f1815d471da1f16e010c6999a7685e0fb1eaade94e82a70"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.503273 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85f64689c7-r5skz"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.507145 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85f64689c7-r5skz" event={"ID":"bb83b759-9e8e-4e99-8193-f8dbf847f440","Type":"ContainerDied","Data":"35453afda79e05c40c688fbf603711f9b08e9544185499f555f3bf2f1d350cb6"}
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.521094 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"793360a3-2e62-4dfb-b69e-69ffd41f8ed1","Type":"ContainerStarted","Data":"2acef9587905f19e7f16d029b1f33d67d474ac34252ec2d6c0c64c2653ec56df"}
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.522784 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" event={"ID":"ba0d605d-3a66-4020-8ed6-8d069d055766","Type":"ContainerStarted","Data":"ef61f9ac1cbe94d052c315c49338ce8aea77955e1f57b049ab7d04f4c9d021a6"}
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.524079 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8422f550-8545-46cd-a310-a70b28a4f7cd","Type":"ContainerStarted","Data":"97e92e4907de31c850bb05768163193f3e7ceed0ff2ec33fb1cd6a7fdb0dd301"}
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.541149 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-242b6"]
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.559720 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-242b6"]
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.574663 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.643843 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af00eab7-5ce5-4058-8328-631a7103290c" path="/var/lib/kubelet/pods/af00eab7-5ce5-4058-8328-631a7103290c/volumes"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.644554 5029 scope.go:117] "RemoveContainer" containerID="9fe692e61daa725a8bab8b13b6b6cd4a995542200a130715e3a5c4e3968886eb"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.655298 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85f64689c7-r5skz"]
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.659025 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-85f64689c7-r5skz"]
Mar 13 20:49:32 crc kubenswrapper[5029]: W0313 20:49:32.663877 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod853d3135_6a6d_4d6c_a56e_1afe15771cdc.slice/crio-54a59851e7ca483e05dd3798543822970230a8bec65c8fcdc72bdb58f3179dc7 WatchSource:0}: Error finding container 54a59851e7ca483e05dd3798543822970230a8bec65c8fcdc72bdb58f3179dc7: Status 404 returned error can't find the container with id 54a59851e7ca483e05dd3798543822970230a8bec65c8fcdc72bdb58f3179dc7
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.695328 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.711349 5029 scope.go:117] "RemoveContainer" containerID="0085d1eb1604fb4548e9e43461f8bc7bf9ee54b12f800dfa9ff3c16b36293694"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.799875 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57db7d86f6-rjplz"
Mar 13 20:49:32 crc kubenswrapper[5029]: I0313 20:49:32.801060 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57db7d86f6-rjplz"
Mar 13 20:49:33 crc kubenswrapper[5029]: I0313 20:49:33.548684 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"16129875-de71-41c7-8c75-17a279ded4b3","Type":"ContainerStarted","Data":"45b58aa69edc5e33cb8fb7a9bdf0eab5993df519135f2223463872501fad3d31"}
Mar 13 20:49:33 crc kubenswrapper[5029]: I0313 20:49:33.549025 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"16129875-de71-41c7-8c75-17a279ded4b3","Type":"ContainerStarted","Data":"d552748531ba5cfda4eb07732b3b00790763c9bc4dc06536e299bb65d1f3d829"}
Mar 13 20:49:33 crc kubenswrapper[5029]: I0313 20:49:33.558949 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerStarted","Data":"4345c598e15dc4d1cc25b892f3080074163d3b543b379f0e51f46a057110cf45"}
Mar 13 20:49:33 crc kubenswrapper[5029]: I0313 20:49:33.566324 5029 generic.go:334] "Generic (PLEG): container finished" podID="ba0d605d-3a66-4020-8ed6-8d069d055766" containerID="be0714921822d8c8d103761f0fe732fffd1ecfb42c68f947049ee12e76dd439d" exitCode=0
Mar 13 20:49:33 crc kubenswrapper[5029]: I0313 20:49:33.566422 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" event={"ID":"ba0d605d-3a66-4020-8ed6-8d069d055766","Type":"ContainerDied","Data":"be0714921822d8c8d103761f0fe732fffd1ecfb42c68f947049ee12e76dd439d"}
Mar 13 20:49:33 crc kubenswrapper[5029]: I0313 20:49:33.594582 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"853d3135-6a6d-4d6c-a56e-1afe15771cdc","Type":"ContainerStarted","Data":"54a59851e7ca483e05dd3798543822970230a8bec65c8fcdc72bdb58f3179dc7"}
Mar 13 20:49:34 crc kubenswrapper[5029]: I0313 20:49:34.523929 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Mar 13 20:49:34 crc kubenswrapper[5029]: I0313 20:49:34.645037 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" path="/var/lib/kubelet/pods/bb83b759-9e8e-4e99-8193-f8dbf847f440/volumes"
Mar 13 20:49:34 crc kubenswrapper[5029]: I0313 20:49:34.646048 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8422f550-8545-46cd-a310-a70b28a4f7cd","Type":"ContainerStarted","Data":"5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985"}
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.715176 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerStarted","Data":"fff6849dd1ebc77e403bc05cdb405ff112a79886b97960311b8416316fd3da19"}
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.721687 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"793360a3-2e62-4dfb-b69e-69ffd41f8ed1","Type":"ContainerStarted","Data":"5f905e99a4c17d2476e82af9f863ab9541615ee5641a74951e99b6663fa6748a"}
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.721730 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"793360a3-2e62-4dfb-b69e-69ffd41f8ed1","Type":"ContainerStarted","Data":"a77134a6664e8e03c5d65dd1fcf1c3d7de0906866a57ce5c2f0b4a504687d2d6"}
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.727811 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" event={"ID":"ba0d605d-3a66-4020-8ed6-8d069d055766","Type":"ContainerStarted","Data":"35d28eca8479127dd9fd4ce9e91f6f1e02894bb8007abfa0f7b9e9c00115d8bc"}
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.728461 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k"
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.730673 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"853d3135-6a6d-4d6c-a56e-1afe15771cdc","Type":"ContainerStarted","Data":"44ceb93dd233613450e5b2dac027163dd76b768e2be8839aaf172525a1547292"}
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.730693 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"853d3135-6a6d-4d6c-a56e-1afe15771cdc","Type":"ContainerStarted","Data":"cf7175d96d34dd5e8cafd59ef50ce0774de6072c72186454e3019d15a1943450"}
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.738584 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"16129875-de71-41c7-8c75-17a279ded4b3","Type":"ContainerStarted","Data":"8e3b1a9c865bfb968583822bac1c553c7c415163692c70679545117119809d02"}
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.738736 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="16129875-de71-41c7-8c75-17a279ded4b3" containerName="cinder-api-log" containerID="cri-o://45b58aa69edc5e33cb8fb7a9bdf0eab5993df519135f2223463872501fad3d31" gracePeriod=30
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.740028 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.740091 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="16129875-de71-41c7-8c75-17a279ded4b3" containerName="cinder-api" containerID="cri-o://8e3b1a9c865bfb968583822bac1c553c7c415163692c70679545117119809d02" gracePeriod=30
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.756832 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.030314838 podStartE2EDuration="5.756812256s" podCreationTimestamp="2026-03-13 20:49:30 +0000 UTC" firstStartedPulling="2026-03-13 20:49:32.367518467 +0000 UTC m=+1332.383600870" lastFinishedPulling="2026-03-13 20:49:34.094015885 +0000 UTC m=+1334.110098288" observedRunningTime="2026-03-13 20:49:35.748491199 +0000 UTC m=+1335.764573612" watchObservedRunningTime="2026-03-13 20:49:35.756812256 +0000 UTC m=+1335.772894659"
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.781135 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.436153581 podStartE2EDuration="5.781119049s" podCreationTimestamp="2026-03-13 20:49:30 +0000 UTC" firstStartedPulling="2026-03-13 20:49:32.686172162 +0000 UTC m=+1332.702254565" lastFinishedPulling="2026-03-13 20:49:34.03113763 +0000 UTC m=+1334.047220033" observedRunningTime="2026-03-13 20:49:35.778260071 +0000 UTC m=+1335.794342474" watchObservedRunningTime="2026-03-13 20:49:35.781119049 +0000 UTC m=+1335.797201452"
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.808170 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" podStartSLOduration=5.808143026 podStartE2EDuration="5.808143026s" podCreationTimestamp="2026-03-13 20:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:35.800549119 +0000 UTC m=+1335.816631522" watchObservedRunningTime="2026-03-13 20:49:35.808143026 +0000 UTC m=+1335.824225429"
Mar 13 20:49:35 crc kubenswrapper[5029]: I0313 20:49:35.828278 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.828261115 podStartE2EDuration="4.828261115s" podCreationTimestamp="2026-03-13 20:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:35.825337125 +0000 UTC m=+1335.841419528" watchObservedRunningTime="2026-03-13 20:49:35.828261115 +0000 UTC m=+1335.844343518"
Mar 13 20:49:36 crc kubenswrapper[5029]: I0313 20:49:36.259933 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Mar 13 20:49:36 crc kubenswrapper[5029]: I0313 20:49:36.272759 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Mar 13 20:49:36 crc kubenswrapper[5029]: I0313 20:49:36.368659 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57db7d86f6-rjplz"
Mar 13 20:49:36 crc kubenswrapper[5029]: I0313 20:49:36.751580 5029 generic.go:334] "Generic (PLEG): container finished" podID="16129875-de71-41c7-8c75-17a279ded4b3" containerID="45b58aa69edc5e33cb8fb7a9bdf0eab5993df519135f2223463872501fad3d31" exitCode=143
Mar 13 20:49:36 crc kubenswrapper[5029]: I0313 20:49:36.752103 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"16129875-de71-41c7-8c75-17a279ded4b3","Type":"ContainerDied","Data":"45b58aa69edc5e33cb8fb7a9bdf0eab5993df519135f2223463872501fad3d31"}
Mar 13 20:49:36 crc kubenswrapper[5029]: I0313 20:49:36.755953 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerStarted","Data":"7deed472b980644a32d4a72d08df592db249ce7d8e5af3ce0022018125b1fcb7"}
Mar 13 20:49:36 crc kubenswrapper[5029]: I0313 20:49:36.760504 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8422f550-8545-46cd-a310-a70b28a4f7cd","Type":"ContainerStarted","Data":"43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290"}
Mar 13 20:49:36 crc kubenswrapper[5029]: I0313 20:49:36.791455 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.947092657 podStartE2EDuration="6.791423525s" podCreationTimestamp="2026-03-13 20:49:30 +0000 UTC" firstStartedPulling="2026-03-13 20:49:32.270467709 +0000 UTC m=+1332.286550112" lastFinishedPulling="2026-03-13 20:49:33.114798577 +0000 UTC m=+1333.130880980" observedRunningTime="2026-03-13 20:49:36.786622894 +0000 UTC m=+1336.802705307" watchObservedRunningTime="2026-03-13 20:49:36.791423525 +0000 UTC m=+1336.807505928"
Mar 13 20:49:37 crc kubenswrapper[5029]: I0313 20:49:37.775207 5029 generic.go:334] "Generic (PLEG): container finished" podID="e27175d1-38d4-4709-9d98-b71adc445f02" containerID="7861a777c5174a282b3807ef824a4d516283ea7280d71642c0d0819694271996" exitCode=0
Mar 13 20:49:37 crc kubenswrapper[5029]: I0313 20:49:37.777078 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-76l7z" event={"ID":"e27175d1-38d4-4709-9d98-b71adc445f02","Type":"ContainerDied","Data":"7861a777c5174a282b3807ef824a4d516283ea7280d71642c0d0819694271996"}
Mar 13 20:49:37 crc kubenswrapper[5029]: I0313 20:49:37.807872 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-674bcdb76-8wx84"
Mar 13 20:49:37 crc kubenswrapper[5029]: I0313 20:49:37.808200 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-567cdd7cd-vrz7b"
Mar 13 20:49:38 crc kubenswrapper[5029]: I0313 20:49:38.089225 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f6c6bfdcb-59kpl"
Mar 13 20:49:38 crc kubenswrapper[5029]: I0313 20:49:38.246157 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-567cdd7cd-vrz7b"
Mar 13 20:49:38 crc kubenswrapper[5029]: I0313 20:49:38.741273 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57db7d86f6-rjplz"
Mar 13 20:49:38 crc kubenswrapper[5029]: I0313 20:49:38.841622 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-567cdd7cd-vrz7b"]
Mar 13 20:49:38 crc kubenswrapper[5029]: I0313 20:49:38.849620 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerStarted","Data":"1a8f30b2ad28824fb0ca56e6be9a0dc1f59afacd4b64b809aa301a484a6476fc"}
Mar 13 20:49:38 crc kubenswrapper[5029]: I0313 20:49:38.850075 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 13 20:49:38 crc kubenswrapper[5029]: I0313 20:49:38.872993 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.385222565 podStartE2EDuration="8.872968581s" podCreationTimestamp="2026-03-13 20:49:30 +0000 UTC" firstStartedPulling="2026-03-13 20:49:32.061069086 +0000 UTC m=+1332.077151489" lastFinishedPulling="2026-03-13 20:49:37.548815102 +0000 UTC m=+1337.564897505" observedRunningTime="2026-03-13 20:49:38.870203916 +0000 UTC m=+1338.886286319" watchObservedRunningTime="2026-03-13 20:49:38.872968581 +0000 UTC m=+1338.889050984"
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.603755 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-76l7z"
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.706705 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjb78\" (UniqueName: \"kubernetes.io/projected/e27175d1-38d4-4709-9d98-b71adc445f02-kube-api-access-xjb78\") pod \"e27175d1-38d4-4709-9d98-b71adc445f02\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") "
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.706987 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-combined-ca-bundle\") pod \"e27175d1-38d4-4709-9d98-b71adc445f02\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") "
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.707151 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-job-config-data\") pod \"e27175d1-38d4-4709-9d98-b71adc445f02\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") "
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.707176 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-config-data\") pod \"e27175d1-38d4-4709-9d98-b71adc445f02\" (UID: \"e27175d1-38d4-4709-9d98-b71adc445f02\") "
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.724139 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27175d1-38d4-4709-9d98-b71adc445f02-kube-api-access-xjb78" (OuterVolumeSpecName: "kube-api-access-xjb78") pod "e27175d1-38d4-4709-9d98-b71adc445f02" (UID: "e27175d1-38d4-4709-9d98-b71adc445f02"). InnerVolumeSpecName "kube-api-access-xjb78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.734721 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-config-data" (OuterVolumeSpecName: "config-data") pod "e27175d1-38d4-4709-9d98-b71adc445f02" (UID: "e27175d1-38d4-4709-9d98-b71adc445f02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.760306 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "e27175d1-38d4-4709-9d98-b71adc445f02" (UID: "e27175d1-38d4-4709-9d98-b71adc445f02"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.767199 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e27175d1-38d4-4709-9d98-b71adc445f02" (UID: "e27175d1-38d4-4709-9d98-b71adc445f02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.811279 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.811324 5029 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-job-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.811334 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27175d1-38d4-4709-9d98-b71adc445f02-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.811346 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjb78\" (UniqueName: \"kubernetes.io/projected/e27175d1-38d4-4709-9d98-b71adc445f02-kube-api-access-xjb78\") on node \"crc\" DevicePath \"\""
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.879796 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-567cdd7cd-vrz7b" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api-log" containerID="cri-o://59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013" gracePeriod=30
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.880009 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-76l7z"
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.881568 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-76l7z" event={"ID":"e27175d1-38d4-4709-9d98-b71adc445f02","Type":"ContainerDied","Data":"3bf6a61572d4223f551e3c5cdb691e15a339647324a035da8b7bdc4a29420c3e"}
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.881624 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf6a61572d4223f551e3c5cdb691e15a339647324a035da8b7bdc4a29420c3e"
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.883507 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-567cdd7cd-vrz7b" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api" containerID="cri-o://30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb" gracePeriod=30
Mar 13 20:49:39 crc kubenswrapper[5029]: I0313 20:49:39.894182 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-567cdd7cd-vrz7b" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": EOF"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.164333 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Mar 13 20:49:40 crc kubenswrapper[5029]: E0313 20:49:40.165091 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-httpd"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.165111 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-httpd"
Mar 13 20:49:40 crc kubenswrapper[5029]: E0313 20:49:40.165135 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af00eab7-5ce5-4058-8328-631a7103290c" containerName="dnsmasq-dns"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.165143 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="af00eab7-5ce5-4058-8328-631a7103290c" containerName="dnsmasq-dns"
Mar 13 20:49:40 crc kubenswrapper[5029]: E0313 20:49:40.165160 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af00eab7-5ce5-4058-8328-631a7103290c" containerName="init"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.165166 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="af00eab7-5ce5-4058-8328-631a7103290c" containerName="init"
Mar 13 20:49:40 crc kubenswrapper[5029]: E0313 20:49:40.165175 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27175d1-38d4-4709-9d98-b71adc445f02" containerName="manila-db-sync"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.165181 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27175d1-38d4-4709-9d98-b71adc445f02" containerName="manila-db-sync"
Mar 13 20:49:40 crc kubenswrapper[5029]: E0313 20:49:40.165192 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-api"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.165197 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-api"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.165362 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-httpd"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.165375 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="af00eab7-5ce5-4058-8328-631a7103290c" containerName="dnsmasq-dns"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.165385 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb83b759-9e8e-4e99-8193-f8dbf847f440" containerName="neutron-api"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.165406 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27175d1-38d4-4709-9d98-b71adc445f02" containerName="manila-db-sync"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.166300 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.178223 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.178270 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.178425 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-hr46l"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.178605 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.225538 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.225757 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.226879 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-scripts\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.226913 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.226966 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxrhl\" (UniqueName: \"kubernetes.io/projected/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-kube-api-access-bxrhl\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.226998 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.303174 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.306888 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.316406 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.340599 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.340696 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-scripts\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.340718 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.340760 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxrhl\" (UniqueName: \"kubernetes.io/projected/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-kube-api-access-bxrhl\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.340786 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.340888 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.348492 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.358885 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.364892 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0"
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.365004 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.383591 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-scripts\") pod \"manila-scheduler-0\" (UID:
\"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.395277 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.412544 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxrhl\" (UniqueName: \"kubernetes.io/projected/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-kube-api-access-bxrhl\") pod \"manila-scheduler-0\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " pod="openstack/manila-scheduler-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.448557 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.448612 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.448635 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-scripts\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc 
kubenswrapper[5029]: I0313 20:49:40.448678 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.448700 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.448729 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.450296 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxpt7\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-kube-api-access-bxpt7\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.450415 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-ceph\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 
20:49:40.485156 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.507932 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2st7k"] Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.511190 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" podUID="ba0d605d-3a66-4020-8ed6-8d069d055766" containerName="dnsmasq-dns" containerID="cri-o://35d28eca8479127dd9fd4ce9e91f6f1e02894bb8007abfa0f7b9e9c00115d8bc" gracePeriod=10 Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.519533 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.549652 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56696ff475-hrh96"] Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.562214 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.564155 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.564237 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.564291 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-scripts\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.564371 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.564415 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.564495 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.564643 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxpt7\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-kube-api-access-bxpt7\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.564792 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-ceph\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.568108 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.568403 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.587732 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-scripts\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.589798 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.593646 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.611519 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.612481 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-ceph\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.614356 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.668535 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxpt7\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-kube-api-access-bxpt7\") pod \"manila-share-share1-0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.684645 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.692255 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-svc\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.692350 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-config\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.692474 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-nb\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.692509 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t4pr\" (UniqueName: \"kubernetes.io/projected/21bfc307-8188-473c-8dc6-d24acb8f0694-kube-api-access-8t4pr\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.692605 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-sb\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: 
\"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.692833 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-swift-storage-0\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.724972 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-hrh96"] Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.765976 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.785971 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.791294 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.816213 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-sb\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.816489 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-swift-storage-0\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 
20:49:40.816597 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-svc\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.816650 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-config\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.816750 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-nb\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.816777 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t4pr\" (UniqueName: \"kubernetes.io/projected/21bfc307-8188-473c-8dc6-d24acb8f0694-kube-api-access-8t4pr\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.817992 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-config\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.818460 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-nb\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.818744 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-swift-storage-0\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.818768 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-sb\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.825393 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-svc\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.825462 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.867654 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t4pr\" (UniqueName: \"kubernetes.io/projected/21bfc307-8188-473c-8dc6-d24acb8f0694-kube-api-access-8t4pr\") pod \"dnsmasq-dns-56696ff475-hrh96\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.918701 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10218fd0-afa5-4023-b0df-2f461de0260d-etc-machine-id\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.919117 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p25z\" (UniqueName: \"kubernetes.io/projected/10218fd0-afa5-4023-b0df-2f461de0260d-kube-api-access-7p25z\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.919139 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10218fd0-afa5-4023-b0df-2f461de0260d-logs\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.919160 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.919193 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-scripts\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.919213 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data-custom\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.919259 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.934213 5029 generic.go:334] "Generic (PLEG): container finished" podID="ba0d605d-3a66-4020-8ed6-8d069d055766" containerID="35d28eca8479127dd9fd4ce9e91f6f1e02894bb8007abfa0f7b9e9c00115d8bc" exitCode=0 Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.934378 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" event={"ID":"ba0d605d-3a66-4020-8ed6-8d069d055766","Type":"ContainerDied","Data":"35d28eca8479127dd9fd4ce9e91f6f1e02894bb8007abfa0f7b9e9c00115d8bc"} Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.970228 5029 generic.go:334] "Generic (PLEG): container finished" podID="0827f1c5-a1b0-435f-9649-695e40413d18" containerID="59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013" exitCode=143 Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.971108 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-567cdd7cd-vrz7b" event={"ID":"0827f1c5-a1b0-435f-9649-695e40413d18","Type":"ContainerDied","Data":"59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013"} Mar 13 20:49:40 crc kubenswrapper[5029]: I0313 20:49:40.991453 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.003113 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.024622 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.024781 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10218fd0-afa5-4023-b0df-2f461de0260d-etc-machine-id\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.024830 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p25z\" (UniqueName: \"kubernetes.io/projected/10218fd0-afa5-4023-b0df-2f461de0260d-kube-api-access-7p25z\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.024866 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10218fd0-afa5-4023-b0df-2f461de0260d-logs\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.024889 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.024928 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-scripts\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.025134 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data-custom\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.030052 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10218fd0-afa5-4023-b0df-2f461de0260d-logs\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.030139 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10218fd0-afa5-4023-b0df-2f461de0260d-etc-machine-id\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.034516 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data-custom\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.035461 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 
20:49:41.039509 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-scripts\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.090115 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.105991 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p25z\" (UniqueName: \"kubernetes.io/projected/10218fd0-afa5-4023-b0df-2f461de0260d-kube-api-access-7p25z\") pod \"manila-api-0\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.226895 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.296515 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" podUID="ba0d605d-3a66-4020-8ed6-8d069d055766" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.399703 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.704792 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.753515 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.778355 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-svc\") pod \"ba0d605d-3a66-4020-8ed6-8d069d055766\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.778483 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-config\") pod \"ba0d605d-3a66-4020-8ed6-8d069d055766\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.778536 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-swift-storage-0\") pod \"ba0d605d-3a66-4020-8ed6-8d069d055766\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.778583 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-nb\") pod \"ba0d605d-3a66-4020-8ed6-8d069d055766\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.778639 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-sb\") pod \"ba0d605d-3a66-4020-8ed6-8d069d055766\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.781136 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-674bcdb76-8wx84" 
Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.893429 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9m88\" (UniqueName: \"kubernetes.io/projected/ba0d605d-3a66-4020-8ed6-8d069d055766-kube-api-access-c9m88\") pod \"ba0d605d-3a66-4020-8ed6-8d069d055766\" (UID: \"ba0d605d-3a66-4020-8ed6-8d069d055766\") " Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.908600 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba0d605d-3a66-4020-8ed6-8d069d055766-kube-api-access-c9m88" (OuterVolumeSpecName: "kube-api-access-c9m88") pod "ba0d605d-3a66-4020-8ed6-8d069d055766" (UID: "ba0d605d-3a66-4020-8ed6-8d069d055766"). InnerVolumeSpecName "kube-api-access-c9m88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.929941 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6c6bfdcb-59kpl"] Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.933705 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba0d605d-3a66-4020-8ed6-8d069d055766" (UID: "ba0d605d-3a66-4020-8ed6-8d069d055766"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.948453 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.963548 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ba0d605d-3a66-4020-8ed6-8d069d055766" (UID: "ba0d605d-3a66-4020-8ed6-8d069d055766"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:41 crc kubenswrapper[5029]: I0313 20:49:41.980141 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba0d605d-3a66-4020-8ed6-8d069d055766" (UID: "ba0d605d-3a66-4020-8ed6-8d069d055766"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.001422 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.001628 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2st7k" event={"ID":"ba0d605d-3a66-4020-8ed6-8d069d055766","Type":"ContainerDied","Data":"ef61f9ac1cbe94d052c315c49338ce8aea77955e1f57b049ab7d04f4c9d021a6"} Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.001660 5029 scope.go:117] "RemoveContainer" containerID="35d28eca8479127dd9fd4ce9e91f6f1e02894bb8007abfa0f7b9e9c00115d8bc" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.001819 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f6c6bfdcb-59kpl" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon-log" containerID="cri-o://a6ab2709590ed237e109db82ee33f472cd645e5b45627f0cfb90ef7afafbc2dc" gracePeriod=30 Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.001920 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f6c6bfdcb-59kpl" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon" containerID="cri-o://f6af4dac6417db6513b5e2602d7469ad832100f41291a81205b348b878058d2d" gracePeriod=30 Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.009962 5029 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba0d605d-3a66-4020-8ed6-8d069d055766" (UID: "ba0d605d-3a66-4020-8ed6-8d069d055766"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.014981 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.015014 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.015028 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.015039 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.015048 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9m88\" (UniqueName: \"kubernetes.io/projected/ba0d605d-3a66-4020-8ed6-8d069d055766-kube-api-access-c9m88\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.041666 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-config" (OuterVolumeSpecName: "config") pod "ba0d605d-3a66-4020-8ed6-8d069d055766" (UID: 
"ba0d605d-3a66-4020-8ed6-8d069d055766"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.088457 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.089037 5029 scope.go:117] "RemoveContainer" containerID="be0714921822d8c8d103761f0fe732fffd1ecfb42c68f947049ee12e76dd439d" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.126049 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0d605d-3a66-4020-8ed6-8d069d055766-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.128226 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.395111 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.410053 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-hrh96"] Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.419981 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2st7k"] Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.435115 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2st7k"] Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.485143 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 20:49:42 crc kubenswrapper[5029]: 
I0313 20:49:42.575341 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 13 20:49:42 crc kubenswrapper[5029]: I0313 20:49:42.618460 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba0d605d-3a66-4020-8ed6-8d069d055766" path="/var/lib/kubelet/pods/ba0d605d-3a66-4020-8ed6-8d069d055766/volumes" Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.054036 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"10218fd0-afa5-4023-b0df-2f461de0260d","Type":"ContainerStarted","Data":"aa2f2e18087bf1794658165ec0ebcb436386288d145c4b5a3439922209b33a7d"} Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.096016 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9813a0d8-78d8-41ea-a5af-b57454a8e0a0","Type":"ContainerStarted","Data":"83f6d3066522b2084ff367f04d2129ce71a1d2b162f74b318bcf603b0dbe752c"} Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.097211 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8","Type":"ContainerStarted","Data":"84b0bdf23587374e634fa02550e823eb25d9fb70a5ae5fb84d9fe52be727d114"} Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.099367 5029 generic.go:334] "Generic (PLEG): container finished" podID="21bfc307-8188-473c-8dc6-d24acb8f0694" containerID="9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c" exitCode=0 Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.099437 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-hrh96" event={"ID":"21bfc307-8188-473c-8dc6-d24acb8f0694","Type":"ContainerDied","Data":"9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c"} Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.099460 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-hrh96" 
event={"ID":"21bfc307-8188-473c-8dc6-d24acb8f0694","Type":"ContainerStarted","Data":"1fb3eaf5ddab8bf90f62cd933a5fbce8c855510466287159071178777bc773fe"} Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.157211 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerName="cinder-scheduler" containerID="cri-o://5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985" gracePeriod=30 Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.158127 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerName="probe" containerID="cri-o://43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290" gracePeriod=30 Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.613586 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.778238 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:43 crc kubenswrapper[5029]: I0313 20:49:43.792282 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.113999 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85c9b98d8-kzhp5"] Mar 13 20:49:44 crc kubenswrapper[5029]: E0313 20:49:44.115483 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0d605d-3a66-4020-8ed6-8d069d055766" containerName="init" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.115501 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0d605d-3a66-4020-8ed6-8d069d055766" containerName="init" Mar 13 20:49:44 crc kubenswrapper[5029]: E0313 20:49:44.115552 5029 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ba0d605d-3a66-4020-8ed6-8d069d055766" containerName="dnsmasq-dns" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.115636 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0d605d-3a66-4020-8ed6-8d069d055766" containerName="dnsmasq-dns" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.116024 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba0d605d-3a66-4020-8ed6-8d069d055766" containerName="dnsmasq-dns" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.117843 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.138166 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85c9b98d8-kzhp5"] Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.270969 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-scripts\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.271064 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d747ae9b-00da-450d-a0cf-cd3a198cad72-logs\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.271105 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-public-tls-certs\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 
20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.271147 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-combined-ca-bundle\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.271256 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trxh2\" (UniqueName: \"kubernetes.io/projected/d747ae9b-00da-450d-a0cf-cd3a198cad72-kube-api-access-trxh2\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.271371 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-internal-tls-certs\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.271443 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-config-data\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.290530 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8","Type":"ContainerStarted","Data":"1af44e7a4cbdd631c38b1f259f5eccd45c213b33e2b3d60e298a9444b970e01a"} Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.320735 5029 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-hrh96" event={"ID":"21bfc307-8188-473c-8dc6-d24acb8f0694","Type":"ContainerStarted","Data":"d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f"} Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.321421 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.337787 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"10218fd0-afa5-4023-b0df-2f461de0260d","Type":"ContainerStarted","Data":"b3663a73adee62ab7233b4a9acade4e86048d5c9d33cef667aebd34e73dc1efe"} Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.352715 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56696ff475-hrh96" podStartSLOduration=4.352675537 podStartE2EDuration="4.352675537s" podCreationTimestamp="2026-03-13 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:44.347363042 +0000 UTC m=+1344.363445445" watchObservedRunningTime="2026-03-13 20:49:44.352675537 +0000 UTC m=+1344.368757940" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.376387 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-scripts\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.376443 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d747ae9b-00da-450d-a0cf-cd3a198cad72-logs\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " 
pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.376461 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-public-tls-certs\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.376484 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-combined-ca-bundle\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.376539 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trxh2\" (UniqueName: \"kubernetes.io/projected/d747ae9b-00da-450d-a0cf-cd3a198cad72-kube-api-access-trxh2\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.376608 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-internal-tls-certs\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.376641 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-config-data\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc 
kubenswrapper[5029]: I0313 20:49:44.381687 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d747ae9b-00da-450d-a0cf-cd3a198cad72-logs\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.382411 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-combined-ca-bundle\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.387730 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-config-data\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.397867 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-public-tls-certs\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.408292 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-internal-tls-certs\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.409356 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d747ae9b-00da-450d-a0cf-cd3a198cad72-scripts\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.412271 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trxh2\" (UniqueName: \"kubernetes.io/projected/d747ae9b-00da-450d-a0cf-cd3a198cad72-kube-api-access-trxh2\") pod \"placement-85c9b98d8-kzhp5\" (UID: \"d747ae9b-00da-450d-a0cf-cd3a198cad72\") " pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:44 crc kubenswrapper[5029]: I0313 20:49:44.542976 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.301182 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.368578 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"10218fd0-afa5-4023-b0df-2f461de0260d","Type":"ContainerStarted","Data":"c57edb92a5800b98dbc33bcb1a575d629ffa14eebd894efd3413e698572667f3"} Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.368772 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="10218fd0-afa5-4023-b0df-2f461de0260d" containerName="manila-api-log" containerID="cri-o://b3663a73adee62ab7233b4a9acade4e86048d5c9d33cef667aebd34e73dc1efe" gracePeriod=30 Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.369143 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.369457 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="10218fd0-afa5-4023-b0df-2f461de0260d" containerName="manila-api" 
containerID="cri-o://c57edb92a5800b98dbc33bcb1a575d629ffa14eebd894efd3413e698572667f3" gracePeriod=30 Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.402429 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.402382559 podStartE2EDuration="5.402382559s" podCreationTimestamp="2026-03-13 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:45.402339028 +0000 UTC m=+1345.418421441" watchObservedRunningTime="2026-03-13 20:49:45.402382559 +0000 UTC m=+1345.418464962" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.403454 5029 generic.go:334] "Generic (PLEG): container finished" podID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerID="43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290" exitCode=0 Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.403505 5029 generic.go:334] "Generic (PLEG): container finished" podID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerID="5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985" exitCode=0 Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.403598 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8422f550-8545-46cd-a310-a70b28a4f7cd","Type":"ContainerDied","Data":"43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290"} Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.403638 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8422f550-8545-46cd-a310-a70b28a4f7cd","Type":"ContainerDied","Data":"5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985"} Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.403650 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"8422f550-8545-46cd-a310-a70b28a4f7cd","Type":"ContainerDied","Data":"97e92e4907de31c850bb05768163193f3e7ceed0ff2ec33fb1cd6a7fdb0dd301"} Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.403669 5029 scope.go:117] "RemoveContainer" containerID="43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.403886 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.415586 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-scripts\") pod \"8422f550-8545-46cd-a310-a70b28a4f7cd\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.417015 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-combined-ca-bundle\") pod \"8422f550-8545-46cd-a310-a70b28a4f7cd\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.417094 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data\") pod \"8422f550-8545-46cd-a310-a70b28a4f7cd\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.417166 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tp5l\" (UniqueName: \"kubernetes.io/projected/8422f550-8545-46cd-a310-a70b28a4f7cd-kube-api-access-5tp5l\") pod \"8422f550-8545-46cd-a310-a70b28a4f7cd\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.417196 5029 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data-custom\") pod \"8422f550-8545-46cd-a310-a70b28a4f7cd\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.417273 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8422f550-8545-46cd-a310-a70b28a4f7cd-etc-machine-id\") pod \"8422f550-8545-46cd-a310-a70b28a4f7cd\" (UID: \"8422f550-8545-46cd-a310-a70b28a4f7cd\") " Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.418051 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8422f550-8545-46cd-a310-a70b28a4f7cd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8422f550-8545-46cd-a310-a70b28a4f7cd" (UID: "8422f550-8545-46cd-a310-a70b28a4f7cd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.418416 5029 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8422f550-8545-46cd-a310-a70b28a4f7cd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.427821 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-scripts" (OuterVolumeSpecName: "scripts") pod "8422f550-8545-46cd-a310-a70b28a4f7cd" (UID: "8422f550-8545-46cd-a310-a70b28a4f7cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.449280 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8422f550-8545-46cd-a310-a70b28a4f7cd" (UID: "8422f550-8545-46cd-a310-a70b28a4f7cd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.449356 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8422f550-8545-46cd-a310-a70b28a4f7cd-kube-api-access-5tp5l" (OuterVolumeSpecName: "kube-api-access-5tp5l") pod "8422f550-8545-46cd-a310-a70b28a4f7cd" (UID: "8422f550-8545-46cd-a310-a70b28a4f7cd"). InnerVolumeSpecName "kube-api-access-5tp5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.452290 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8","Type":"ContainerStarted","Data":"2d2df07b6bd1464fe4775d4aeab6f5aa768daaa057845bbf1759af6f34ed81f1"} Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.455079 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6c6bfdcb-59kpl" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:56290->10.217.0.155:8443: read: connection reset by peer" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.491193 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.639107912 podStartE2EDuration="5.491178222s" podCreationTimestamp="2026-03-13 20:49:40 +0000 UTC" 
firstStartedPulling="2026-03-13 20:49:42.358467354 +0000 UTC m=+1342.374549757" lastFinishedPulling="2026-03-13 20:49:43.210537664 +0000 UTC m=+1343.226620067" observedRunningTime="2026-03-13 20:49:45.48524907 +0000 UTC m=+1345.501331493" watchObservedRunningTime="2026-03-13 20:49:45.491178222 +0000 UTC m=+1345.507260625" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.520160 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.520196 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tp5l\" (UniqueName: \"kubernetes.io/projected/8422f550-8545-46cd-a310-a70b28a4f7cd-kube-api-access-5tp5l\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.520208 5029 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.557838 5029 scope.go:117] "RemoveContainer" containerID="5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.626408 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8422f550-8545-46cd-a310-a70b28a4f7cd" (UID: "8422f550-8545-46cd-a310-a70b28a4f7cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.725919 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.737461 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85c9b98d8-kzhp5"] Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.774040 5029 scope.go:117] "RemoveContainer" containerID="43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290" Mar 13 20:49:45 crc kubenswrapper[5029]: E0313 20:49:45.775131 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290\": container with ID starting with 43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290 not found: ID does not exist" containerID="43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.775160 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290"} err="failed to get container status \"43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290\": rpc error: code = NotFound desc = could not find container \"43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290\": container with ID starting with 43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290 not found: ID does not exist" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.775182 5029 scope.go:117] "RemoveContainer" containerID="5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.786230 5029 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data" (OuterVolumeSpecName: "config-data") pod "8422f550-8545-46cd-a310-a70b28a4f7cd" (UID: "8422f550-8545-46cd-a310-a70b28a4f7cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:45 crc kubenswrapper[5029]: E0313 20:49:45.787240 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985\": container with ID starting with 5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985 not found: ID does not exist" containerID="5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.787295 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985"} err="failed to get container status \"5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985\": rpc error: code = NotFound desc = could not find container \"5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985\": container with ID starting with 5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985 not found: ID does not exist" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.787335 5029 scope.go:117] "RemoveContainer" containerID="43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.788126 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290"} err="failed to get container status \"43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290\": rpc error: code = NotFound desc = could not find container 
\"43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290\": container with ID starting with 43abf3318f0a1ba20711ae3af8904902fd96bffde96b8abf8189d5bfccc94290 not found: ID does not exist" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.788215 5029 scope.go:117] "RemoveContainer" containerID="5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.789280 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985"} err="failed to get container status \"5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985\": rpc error: code = NotFound desc = could not find container \"5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985\": container with ID starting with 5bd83d73e3494ead277f891e82ae1d48639cad98c4d159cdca279081e79ac985 not found: ID does not exist" Mar 13 20:49:45 crc kubenswrapper[5029]: I0313 20:49:45.829374 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8422f550-8545-46cd-a310-a70b28a4f7cd-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.053145 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.083774 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.110663 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 20:49:46 crc kubenswrapper[5029]: E0313 20:49:46.111248 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerName="cinder-scheduler" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.111275 5029 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerName="cinder-scheduler" Mar 13 20:49:46 crc kubenswrapper[5029]: E0313 20:49:46.111307 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerName="probe" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.111315 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerName="probe" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.111517 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerName="probe" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.111538 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8422f550-8545-46cd-a310-a70b28a4f7cd" containerName="cinder-scheduler" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.112701 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.120191 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.145867 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.241172 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.241273 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.241300 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.241381 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-config-data\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.241423 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-scripts\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.241452 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgk84\" (UniqueName: \"kubernetes.io/projected/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-kube-api-access-vgk84\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.257632 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.284101 5029 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.298367 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.348321 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-config-data\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.348402 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-scripts\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.348435 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgk84\" (UniqueName: \"kubernetes.io/projected/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-kube-api-access-vgk84\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.348647 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.348697 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.348714 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.351658 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.365879 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-config-data\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.365959 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.370534 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.390727 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-scripts\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.394651 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgk84\" (UniqueName: \"kubernetes.io/projected/eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54-kube-api-access-vgk84\") pod \"cinder-scheduler-0\" (UID: \"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54\") " pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.425105 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-567cdd7cd-vrz7b" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": read tcp 10.217.0.2:53524->10.217.0.168:9311: read: connection reset by peer" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.425255 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-567cdd7cd-vrz7b" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.168:9311/healthcheck\": read tcp 10.217.0.2:53518->10.217.0.168:9311: read: connection reset by peer" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.428080 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-backup-0"] Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.439315 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.515965 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.522383 5029 generic.go:334] "Generic (PLEG): container finished" podID="10218fd0-afa5-4023-b0df-2f461de0260d" containerID="b3663a73adee62ab7233b4a9acade4e86048d5c9d33cef667aebd34e73dc1efe" exitCode=143 Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.523161 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"10218fd0-afa5-4023-b0df-2f461de0260d","Type":"ContainerDied","Data":"b3663a73adee62ab7233b4a9acade4e86048d5c9d33cef667aebd34e73dc1efe"} Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.528595 5029 generic.go:334] "Generic (PLEG): container finished" podID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerID="f6af4dac6417db6513b5e2602d7469ad832100f41291a81205b348b878058d2d" exitCode=0 Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.528683 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6c6bfdcb-59kpl" event={"ID":"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9","Type":"ContainerDied","Data":"f6af4dac6417db6513b5e2602d7469ad832100f41291a81205b348b878058d2d"} Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.538611 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c9b98d8-kzhp5" event={"ID":"d747ae9b-00da-450d-a0cf-cd3a198cad72","Type":"ContainerStarted","Data":"d571f5daf7bed82604c17c2edf220cb3c3df926d27a961e20be29cd64d29fb1b"} Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.538672 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c9b98d8-kzhp5" event={"ID":"d747ae9b-00da-450d-a0cf-cd3a198cad72","Type":"ContainerStarted","Data":"00b72eabaedde2c63f3db851eb0a914e6927f52a95f78f2e9e8d5faad6ee6130"} Mar 13 20:49:46 crc 
kubenswrapper[5029]: I0313 20:49:46.539477 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerName="probe" containerID="cri-o://5f905e99a4c17d2476e82af9f863ab9541615ee5641a74951e99b6663fa6748a" gracePeriod=30 Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.539511 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerName="cinder-backup" containerID="cri-o://cf7175d96d34dd5e8cafd59ef50ce0774de6072c72186454e3019d15a1943450" gracePeriod=30 Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.539042 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerName="cinder-volume" containerID="cri-o://a77134a6664e8e03c5d65dd1fcf1c3d7de0906866a57ce5c2f0b4a504687d2d6" gracePeriod=30 Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.540032 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerName="probe" containerID="cri-o://44ceb93dd233613450e5b2dac027163dd76b768e2be8839aaf172525a1547292" gracePeriod=30 Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.627495 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8422f550-8545-46cd-a310-a70b28a4f7cd" path="/var/lib/kubelet/pods/8422f550-8545-46cd-a310-a70b28a4f7cd/volumes" Mar 13 20:49:46 crc kubenswrapper[5029]: I0313 20:49:46.880419 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7bb76fc874-xq9l8" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.284095 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 13 20:49:47 crc kubenswrapper[5029]: 
W0313 20:49:47.306085 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaf1d8b8_6dfa_4a48_a32f_afa94adf5e54.slice/crio-68d8c3dc597f85c647d513a81b969948d8ef784314960bddb1193491160ea377 WatchSource:0}: Error finding container 68d8c3dc597f85c647d513a81b969948d8ef784314960bddb1193491160ea377: Status 404 returned error can't find the container with id 68d8c3dc597f85c647d513a81b969948d8ef784314960bddb1193491160ea377 Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.333224 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.384920 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-combined-ca-bundle\") pod \"0827f1c5-a1b0-435f-9649-695e40413d18\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.384967 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data-custom\") pod \"0827f1c5-a1b0-435f-9649-695e40413d18\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.385110 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data\") pod \"0827f1c5-a1b0-435f-9649-695e40413d18\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.385182 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0827f1c5-a1b0-435f-9649-695e40413d18-logs\") pod 
\"0827f1c5-a1b0-435f-9649-695e40413d18\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.385232 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm7db\" (UniqueName: \"kubernetes.io/projected/0827f1c5-a1b0-435f-9649-695e40413d18-kube-api-access-zm7db\") pod \"0827f1c5-a1b0-435f-9649-695e40413d18\" (UID: \"0827f1c5-a1b0-435f-9649-695e40413d18\") " Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.397240 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0827f1c5-a1b0-435f-9649-695e40413d18-logs" (OuterVolumeSpecName: "logs") pod "0827f1c5-a1b0-435f-9649-695e40413d18" (UID: "0827f1c5-a1b0-435f-9649-695e40413d18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.399140 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0827f1c5-a1b0-435f-9649-695e40413d18-kube-api-access-zm7db" (OuterVolumeSpecName: "kube-api-access-zm7db") pod "0827f1c5-a1b0-435f-9649-695e40413d18" (UID: "0827f1c5-a1b0-435f-9649-695e40413d18"). InnerVolumeSpecName "kube-api-access-zm7db". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.401046 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0827f1c5-a1b0-435f-9649-695e40413d18" (UID: "0827f1c5-a1b0-435f-9649-695e40413d18"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.443028 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0827f1c5-a1b0-435f-9649-695e40413d18" (UID: "0827f1c5-a1b0-435f-9649-695e40413d18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.486892 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data" (OuterVolumeSpecName: "config-data") pod "0827f1c5-a1b0-435f-9649-695e40413d18" (UID: "0827f1c5-a1b0-435f-9649-695e40413d18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.487825 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm7db\" (UniqueName: \"kubernetes.io/projected/0827f1c5-a1b0-435f-9649-695e40413d18-kube-api-access-zm7db\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.487868 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.487880 5029 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.487892 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0827f1c5-a1b0-435f-9649-695e40413d18-config-data\") on node \"crc\" 
DevicePath \"\"" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.487905 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0827f1c5-a1b0-435f-9649-695e40413d18-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.602153 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c9b98d8-kzhp5" event={"ID":"d747ae9b-00da-450d-a0cf-cd3a198cad72","Type":"ContainerStarted","Data":"5065030567aa3c042bfbaef3800da9d5fdaec0d56b0977a2b4bb6972856b9300"} Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.603722 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.603759 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.632082 5029 generic.go:334] "Generic (PLEG): container finished" podID="0827f1c5-a1b0-435f-9649-695e40413d18" containerID="30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb" exitCode=0 Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.632170 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-567cdd7cd-vrz7b" event={"ID":"0827f1c5-a1b0-435f-9649-695e40413d18","Type":"ContainerDied","Data":"30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb"} Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.632206 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-567cdd7cd-vrz7b" event={"ID":"0827f1c5-a1b0-435f-9649-695e40413d18","Type":"ContainerDied","Data":"d95d356083a93b07d6d3d5d6965480332a44c6baddd19966ff3e764df2009879"} Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.632228 5029 scope.go:117] "RemoveContainer" containerID="30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb" Mar 13 20:49:47 
crc kubenswrapper[5029]: I0313 20:49:47.632439 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-567cdd7cd-vrz7b" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.654049 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85c9b98d8-kzhp5" podStartSLOduration=3.654028366 podStartE2EDuration="3.654028366s" podCreationTimestamp="2026-03-13 20:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:47.637968328 +0000 UTC m=+1347.654050731" watchObservedRunningTime="2026-03-13 20:49:47.654028366 +0000 UTC m=+1347.670110769" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.694286 5029 generic.go:334] "Generic (PLEG): container finished" podID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerID="5f905e99a4c17d2476e82af9f863ab9541615ee5641a74951e99b6663fa6748a" exitCode=0 Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.694325 5029 generic.go:334] "Generic (PLEG): container finished" podID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerID="a77134a6664e8e03c5d65dd1fcf1c3d7de0906866a57ce5c2f0b4a504687d2d6" exitCode=0 Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.694378 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"793360a3-2e62-4dfb-b69e-69ffd41f8ed1","Type":"ContainerDied","Data":"5f905e99a4c17d2476e82af9f863ab9541615ee5641a74951e99b6663fa6748a"} Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.694455 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"793360a3-2e62-4dfb-b69e-69ffd41f8ed1","Type":"ContainerDied","Data":"a77134a6664e8e03c5d65dd1fcf1c3d7de0906866a57ce5c2f0b4a504687d2d6"} Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.696654 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54","Type":"ContainerStarted","Data":"68d8c3dc597f85c647d513a81b969948d8ef784314960bddb1193491160ea377"} Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.704796 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-567cdd7cd-vrz7b"] Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.704935 5029 generic.go:334] "Generic (PLEG): container finished" podID="10218fd0-afa5-4023-b0df-2f461de0260d" containerID="c57edb92a5800b98dbc33bcb1a575d629ffa14eebd894efd3413e698572667f3" exitCode=0 Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.704985 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"10218fd0-afa5-4023-b0df-2f461de0260d","Type":"ContainerDied","Data":"c57edb92a5800b98dbc33bcb1a575d629ffa14eebd894efd3413e698572667f3"} Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.721878 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-567cdd7cd-vrz7b"] Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.733454 5029 scope.go:117] "RemoveContainer" containerID="59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.814162 5029 scope.go:117] "RemoveContainer" containerID="30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb" Mar 13 20:49:47 crc kubenswrapper[5029]: E0313 20:49:47.817284 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb\": container with ID starting with 30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb not found: ID does not exist" containerID="30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.817316 5029 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb"} err="failed to get container status \"30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb\": rpc error: code = NotFound desc = could not find container \"30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb\": container with ID starting with 30a2608dbc7c2b6af775e790d79554c598240cd5db71a8421a0252ff98bd4ffb not found: ID does not exist" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.817336 5029 scope.go:117] "RemoveContainer" containerID="59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013" Mar 13 20:49:47 crc kubenswrapper[5029]: E0313 20:49:47.818405 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013\": container with ID starting with 59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013 not found: ID does not exist" containerID="59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013" Mar 13 20:49:47 crc kubenswrapper[5029]: I0313 20:49:47.818446 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013"} err="failed to get container status \"59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013\": rpc error: code = NotFound desc = could not find container \"59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013\": container with ID starting with 59b3dd3dea019cce546158377a01783f1d55cc6915c87dbf0417ec734bb95013 not found: ID does not exist" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.120277 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.313700 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-scripts\") pod \"10218fd0-afa5-4023-b0df-2f461de0260d\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.313804 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10218fd0-afa5-4023-b0df-2f461de0260d-logs\") pod \"10218fd0-afa5-4023-b0df-2f461de0260d\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.313929 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10218fd0-afa5-4023-b0df-2f461de0260d-etc-machine-id\") pod \"10218fd0-afa5-4023-b0df-2f461de0260d\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.313979 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data-custom\") pod \"10218fd0-afa5-4023-b0df-2f461de0260d\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.314012 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-combined-ca-bundle\") pod \"10218fd0-afa5-4023-b0df-2f461de0260d\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.314117 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data\") pod \"10218fd0-afa5-4023-b0df-2f461de0260d\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.314211 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p25z\" (UniqueName: \"kubernetes.io/projected/10218fd0-afa5-4023-b0df-2f461de0260d-kube-api-access-7p25z\") pod \"10218fd0-afa5-4023-b0df-2f461de0260d\" (UID: \"10218fd0-afa5-4023-b0df-2f461de0260d\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.315958 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10218fd0-afa5-4023-b0df-2f461de0260d-logs" (OuterVolumeSpecName: "logs") pod "10218fd0-afa5-4023-b0df-2f461de0260d" (UID: "10218fd0-afa5-4023-b0df-2f461de0260d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.316459 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.317656 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10218fd0-afa5-4023-b0df-2f461de0260d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "10218fd0-afa5-4023-b0df-2f461de0260d" (UID: "10218fd0-afa5-4023-b0df-2f461de0260d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.323645 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "10218fd0-afa5-4023-b0df-2f461de0260d" (UID: "10218fd0-afa5-4023-b0df-2f461de0260d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.327153 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-scripts" (OuterVolumeSpecName: "scripts") pod "10218fd0-afa5-4023-b0df-2f461de0260d" (UID: "10218fd0-afa5-4023-b0df-2f461de0260d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.339779 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10218fd0-afa5-4023-b0df-2f461de0260d-kube-api-access-7p25z" (OuterVolumeSpecName: "kube-api-access-7p25z") pod "10218fd0-afa5-4023-b0df-2f461de0260d" (UID: "10218fd0-afa5-4023-b0df-2f461de0260d"). InnerVolumeSpecName "kube-api-access-7p25z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.417631 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p25z\" (UniqueName: \"kubernetes.io/projected/10218fd0-afa5-4023-b0df-2f461de0260d-kube-api-access-7p25z\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.417667 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.417681 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10218fd0-afa5-4023-b0df-2f461de0260d-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.417691 5029 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10218fd0-afa5-4023-b0df-2f461de0260d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc 
kubenswrapper[5029]: I0313 20:49:48.417701 5029 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.418660 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data" (OuterVolumeSpecName: "config-data") pod "10218fd0-afa5-4023-b0df-2f461de0260d" (UID: "10218fd0-afa5-4023-b0df-2f461de0260d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.435955 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10218fd0-afa5-4023-b0df-2f461de0260d" (UID: "10218fd0-afa5-4023-b0df-2f461de0260d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525348 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-brick\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525449 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data-custom\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525544 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-iscsi\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525618 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-lib-cinder\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525620 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525644 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-run\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525668 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525688 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-nvme\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525706 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525723 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-machine-id\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525729 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-run" (OuterVolumeSpecName: "run") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525752 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525764 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvntl\" (UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-kube-api-access-tvntl\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.525774 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.533776 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-kube-api-access-tvntl" (OuterVolumeSpecName: "kube-api-access-tvntl") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "kube-api-access-tvntl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.542230 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.545272 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-combined-ca-bundle\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.545357 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-cinder\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.545448 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-dev\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: 
\"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.545483 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-sys\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.545509 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-lib-modules\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.545567 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.545604 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-scripts\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.545651 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-ceph\") pod \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\" (UID: \"793360a3-2e62-4dfb-b69e-69ffd41f8ed1\") " Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.546871 5029 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-brick\") on node \"crc\" 
DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.546906 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.546918 5029 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.546932 5029 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-iscsi\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.546943 5029 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.546958 5029 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-run\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.546970 5029 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-nvme\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.546982 5029 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.546994 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvntl\" 
(UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-kube-api-access-tvntl\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.547008 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10218fd0-afa5-4023-b0df-2f461de0260d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.548096 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-sys" (OuterVolumeSpecName: "sys") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.549073 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.557759 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-dev" (OuterVolumeSpecName: "dev") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.558639 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.565442 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-ceph" (OuterVolumeSpecName: "ceph") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.577145 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-scripts" (OuterVolumeSpecName: "scripts") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.622094 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" path="/var/lib/kubelet/pods/0827f1c5-a1b0-435f-9649-695e40413d18/volumes" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.651300 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.651337 5029 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-ceph\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.651346 5029 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.651359 5029 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-dev\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.651367 5029 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-sys\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.651375 5029 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-lib-modules\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.669840 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.726110 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.735048 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.754121 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.759829 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data" (OuterVolumeSpecName: "config-data") pod "793360a3-2e62-4dfb-b69e-69ffd41f8ed1" (UID: "793360a3-2e62-4dfb-b69e-69ffd41f8ed1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.761887 5029 generic.go:334] "Generic (PLEG): container finished" podID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerID="44ceb93dd233613450e5b2dac027163dd76b768e2be8839aaf172525a1547292" exitCode=0 Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.761921 5029 generic.go:334] "Generic (PLEG): container finished" podID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerID="cf7175d96d34dd5e8cafd59ef50ce0774de6072c72186454e3019d15a1943450" exitCode=0 Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.863800 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793360a3-2e62-4dfb-b69e-69ffd41f8ed1-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.894784 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"793360a3-2e62-4dfb-b69e-69ffd41f8ed1","Type":"ContainerDied","Data":"2acef9587905f19e7f16d029b1f33d67d474ac34252ec2d6c0c64c2653ec56df"} Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.894844 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"10218fd0-afa5-4023-b0df-2f461de0260d","Type":"ContainerDied","Data":"aa2f2e18087bf1794658165ec0ebcb436386288d145c4b5a3439922209b33a7d"} Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.894882 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"853d3135-6a6d-4d6c-a56e-1afe15771cdc","Type":"ContainerDied","Data":"44ceb93dd233613450e5b2dac027163dd76b768e2be8839aaf172525a1547292"} Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.894896 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"853d3135-6a6d-4d6c-a56e-1afe15771cdc","Type":"ContainerDied","Data":"cf7175d96d34dd5e8cafd59ef50ce0774de6072c72186454e3019d15a1943450"} Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.894905 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"853d3135-6a6d-4d6c-a56e-1afe15771cdc","Type":"ContainerDied","Data":"54a59851e7ca483e05dd3798543822970230a8bec65c8fcdc72bdb58f3179dc7"} Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.894915 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54a59851e7ca483e05dd3798543822970230a8bec65c8fcdc72bdb58f3179dc7" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.894933 5029 scope.go:117] "RemoveContainer" containerID="5f905e99a4c17d2476e82af9f863ab9541615ee5641a74951e99b6663fa6748a" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.943156 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.976440 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.991643 5029 scope.go:117] "RemoveContainer" containerID="a77134a6664e8e03c5d65dd1fcf1c3d7de0906866a57ce5c2f0b4a504687d2d6" Mar 13 20:49:48 crc kubenswrapper[5029]: I0313 20:49:48.992623 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.018916 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: E0313 20:49:49.020092 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020119 5029 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api" Mar 13 20:49:49 crc kubenswrapper[5029]: E0313 20:49:49.020141 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api-log" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020180 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api-log" Mar 13 20:49:49 crc kubenswrapper[5029]: E0313 20:49:49.020197 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerName="cinder-volume" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020203 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerName="cinder-volume" Mar 13 20:49:49 crc kubenswrapper[5029]: E0313 20:49:49.020249 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerName="cinder-backup" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020255 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerName="cinder-backup" Mar 13 20:49:49 crc kubenswrapper[5029]: E0313 20:49:49.020274 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10218fd0-afa5-4023-b0df-2f461de0260d" containerName="manila-api" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020281 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="10218fd0-afa5-4023-b0df-2f461de0260d" containerName="manila-api" Mar 13 20:49:49 crc kubenswrapper[5029]: E0313 20:49:49.020289 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10218fd0-afa5-4023-b0df-2f461de0260d" containerName="manila-api-log" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020297 5029 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="10218fd0-afa5-4023-b0df-2f461de0260d" containerName="manila-api-log" Mar 13 20:49:49 crc kubenswrapper[5029]: E0313 20:49:49.020321 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerName="probe" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020328 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerName="probe" Mar 13 20:49:49 crc kubenswrapper[5029]: E0313 20:49:49.020336 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerName="probe" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020342 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerName="probe" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020800 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="10218fd0-afa5-4023-b0df-2f461de0260d" containerName="manila-api-log" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020828 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerName="cinder-backup" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020839 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" containerName="barbican-api" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020884 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerName="probe" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020910 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" containerName="probe" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020921 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="0827f1c5-a1b0-435f-9649-695e40413d18" 
containerName="barbican-api-log" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020938 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="10218fd0-afa5-4023-b0df-2f461de0260d" containerName="manila-api" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.020946 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" containerName="cinder-volume" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.022742 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.029179 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.029687 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.037577 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.067816 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.082391 5029 scope.go:117] "RemoveContainer" containerID="c57edb92a5800b98dbc33bcb1a575d629ffa14eebd894efd3413e698572667f3" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.134217 5029 scope.go:117] "RemoveContainer" containerID="b3663a73adee62ab7233b4a9acade4e86048d5c9d33cef667aebd34e73dc1efe" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.202908 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204415 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-cinder\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204492 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-iscsi\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204534 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-lib-cinder\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204558 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-run\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204619 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204635 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-brick\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204706 5029 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data-custom\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204741 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-ceph\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204756 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-lib-modules\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204784 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-machine-id\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204876 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdnl5\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-kube-api-access-gdnl5\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204898 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-dev\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: 
\"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204938 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-combined-ca-bundle\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.204991 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-sys\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205030 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-nvme\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205061 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-scripts\") pod \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\" (UID: \"853d3135-6a6d-4d6c-a56e-1afe15771cdc\") " Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205647 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjdw\" (UniqueName: \"kubernetes.io/projected/6b8f9967-671e-49b9-8e28-15c9b460086e-kube-api-access-vwjdw\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205683 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6b8f9967-671e-49b9-8e28-15c9b460086e-etc-machine-id\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205733 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f9967-671e-49b9-8e28-15c9b460086e-logs\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205752 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-public-tls-certs\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205782 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-config-data\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205825 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-scripts\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205880 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-config-data-custom\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " 
pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.205946 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-internal-tls-certs\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.206136 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-dev" (OuterVolumeSpecName: "dev") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.206188 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.206251 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.206284 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.206329 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-run" (OuterVolumeSpecName: "run") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.206628 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.207771 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.208128 5029 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.208150 5029 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-iscsi\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.208164 5029 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.208184 5029 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-run\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.208197 5029 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-var-locks-brick\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.208209 5029 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-dev\") on node \"crc\" DevicePath \"\"" 
Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.208946 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.209220 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-sys" (OuterVolumeSpecName: "sys") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.209378 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.210007 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.216692 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-scripts" (OuterVolumeSpecName: "scripts") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.247589 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-kube-api-access-gdnl5" (OuterVolumeSpecName: "kube-api-access-gdnl5") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "kube-api-access-gdnl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.256176 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.257574 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.258435 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-ceph" (OuterVolumeSpecName: "ceph") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.297118 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311140 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f9967-671e-49b9-8e28-15c9b460086e-logs\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311208 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-public-tls-certs\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311239 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-config-data\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311288 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-scripts\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311348 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-config-data-custom\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311415 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-internal-tls-certs\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311511 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311559 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjdw\" (UniqueName: \"kubernetes.io/projected/6b8f9967-671e-49b9-8e28-15c9b460086e-kube-api-access-vwjdw\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311587 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b8f9967-671e-49b9-8e28-15c9b460086e-etc-machine-id\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311663 5029 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 
20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311683 5029 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-ceph\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311698 5029 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-lib-modules\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311711 5029 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311726 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdnl5\" (UniqueName: \"kubernetes.io/projected/853d3135-6a6d-4d6c-a56e-1afe15771cdc-kube-api-access-gdnl5\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311740 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311753 5029 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-sys\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311765 5029 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/853d3135-6a6d-4d6c-a56e-1afe15771cdc-etc-nvme\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311776 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.311834 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b8f9967-671e-49b9-8e28-15c9b460086e-etc-machine-id\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.313065 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f9967-671e-49b9-8e28-15c9b460086e-logs\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.317542 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-public-tls-certs\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.321703 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-config-data\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.322245 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-scripts\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.323685 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.324559 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-config-data-custom\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.324806 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f9967-671e-49b9-8e28-15c9b460086e-internal-tls-certs\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.339892 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.342052 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.345661 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.351241 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjdw\" (UniqueName: \"kubernetes.io/projected/6b8f9967-671e-49b9-8e28-15c9b460086e-kube-api-access-vwjdw\") pod \"manila-api-0\" (UID: \"6b8f9967-671e-49b9-8e28-15c9b460086e\") " pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.380426 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data" (OuterVolumeSpecName: "config-data") pod "853d3135-6a6d-4d6c-a56e-1afe15771cdc" (UID: "853d3135-6a6d-4d6c-a56e-1afe15771cdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.395982 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.401082 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.415643 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853d3135-6a6d-4d6c-a56e-1afe15771cdc-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.518904 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.518948 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b42k\" (UniqueName: \"kubernetes.io/projected/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-kube-api-access-4b42k\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.518996 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519015 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519042 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519082 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519117 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-run\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519136 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519160 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519184 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519227 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519250 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519271 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519291 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519313 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.519336 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: E0313 20:49:49.626619 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod793360a3_2e62_4dfb_b69e_69ffd41f8ed1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod793360a3_2e62_4dfb_b69e_69ffd41f8ed1.slice/crio-2acef9587905f19e7f16d029b1f33d67d474ac34252ec2d6c0c64c2653ec56df\": RecentStats: unable to find data in memory cache]" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.628612 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.628726 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.628785 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.628827 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.628902 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.628835 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629263 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629343 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629408 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629457 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b42k\" (UniqueName: \"kubernetes.io/projected/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-kube-api-access-4b42k\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629577 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629618 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629692 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-etc-iscsi\") pod 
\"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629824 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629840 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629879 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-run\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629903 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.629934 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 
20:49:49.629964 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.630478 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.634730 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.634807 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.634834 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-run\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.634951 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-var-locks-brick\") pod 
\"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.636007 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.636122 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.639819 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.645816 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.658216 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: 
I0313 20:49:49.661257 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b42k\" (UniqueName: \"kubernetes.io/projected/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-kube-api-access-4b42k\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.663663 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.664056 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e3adcc-0538-4137-a9c4-09fb34e79fe9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b5e3adcc-0538-4137-a9c4-09fb34e79fe9\") " pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.677764 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.826746 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54","Type":"ContainerStarted","Data":"8a42327d8787e307ab39ecdd4ea4d9e7a54e2962c2da926c1e738839c247ebce"} Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.844135 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.900653 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-backup-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.900715 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-backup-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.922984 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.926516 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.935930 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Mar 13 20:49:49 crc kubenswrapper[5029]: I0313 20:49:49.945346 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044329 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-dev\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044408 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-run\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044435 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/85582134-3a4c-4127-8b04-5a0800fe403c-ceph\") pod \"cinder-backup-0\" 
(UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044467 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-sys\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044488 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044512 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044525 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044546 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-lib-modules\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 
20:49:50.044573 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044597 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044612 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-scripts\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044634 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044695 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044735 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45vkr\" (UniqueName: \"kubernetes.io/projected/85582134-3a4c-4127-8b04-5a0800fe403c-kube-api-access-45vkr\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044754 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-config-data\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.044777 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146105 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146442 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-dev\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146496 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-run\") pod 
\"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146516 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/85582134-3a4c-4127-8b04-5a0800fe403c-ceph\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146548 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-sys\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146573 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146595 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146613 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146634 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-lib-modules\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146661 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146687 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146703 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-scripts\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146723 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146758 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " 
pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146786 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-config-data\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.146802 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45vkr\" (UniqueName: \"kubernetes.io/projected/85582134-3a4c-4127-8b04-5a0800fe403c-kube-api-access-45vkr\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.147212 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.147253 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-dev\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.147275 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-run\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.149013 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.149110 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-sys\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.149147 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.149201 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.151718 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/85582134-3a4c-4127-8b04-5a0800fe403c-ceph\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.151792 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.151922 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-lib-modules\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.160033 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/85582134-3a4c-4127-8b04-5a0800fe403c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.162746 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.173368 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.181729 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-scripts\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.184999 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85582134-3a4c-4127-8b04-5a0800fe403c-config-data\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " 
pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.197997 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45vkr\" (UniqueName: \"kubernetes.io/projected/85582134-3a4c-4127-8b04-5a0800fe403c-kube-api-access-45vkr\") pod \"cinder-backup-0\" (UID: \"85582134-3a4c-4127-8b04-5a0800fe403c\") " pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.275428 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.288243 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.467697 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.642635 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10218fd0-afa5-4023-b0df-2f461de0260d" path="/var/lib/kubelet/pods/10218fd0-afa5-4023-b0df-2f461de0260d/volumes" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.645924 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793360a3-2e62-4dfb-b69e-69ffd41f8ed1" path="/var/lib/kubelet/pods/793360a3-2e62-4dfb-b69e-69ffd41f8ed1/volumes" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.647005 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853d3135-6a6d-4d6c-a56e-1afe15771cdc" path="/var/lib/kubelet/pods/853d3135-6a6d-4d6c-a56e-1afe15771cdc/volumes" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.647733 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.863830 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Mar 13 20:49:50 crc kubenswrapper[5029]: 
I0313 20:49:50.870335 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b5e3adcc-0538-4137-a9c4-09fb34e79fe9","Type":"ContainerStarted","Data":"bc38c324a8e8028904c251fc1e0c04df8ac152414c1bbf87d29d33b4fe50f023"} Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.887342 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54","Type":"ContainerStarted","Data":"d576fb04b991027b4a564995510ed4899ae3747dc809774a34b8959cd69c8693"} Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.895699 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6b8f9967-671e-49b9-8e28-15c9b460086e","Type":"ContainerStarted","Data":"4a885163703bd9e5dcfd4fd7127bc94c4277cc04122129300b4e3c76e6b195fb"} Mar 13 20:49:50 crc kubenswrapper[5029]: I0313 20:49:50.919803 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.919776823 podStartE2EDuration="4.919776823s" podCreationTimestamp="2026-03-13 20:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:50.913717417 +0000 UTC m=+1350.929799840" watchObservedRunningTime="2026-03-13 20:49:50.919776823 +0000 UTC m=+1350.935859226" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.006766 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.081100 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7ht2z"] Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.081606 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" 
podUID="6e726c0a-09e0-46c4-870f-440581c3af6e" containerName="dnsmasq-dns" containerID="cri-o://ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742" gracePeriod=10 Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.442986 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.487803 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.489333 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.497297 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jhqvv" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.497513 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.497670 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.529841 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.533812 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.533901 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.533938 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config-secret\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.533982 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jnq\" (UniqueName: \"kubernetes.io/projected/66ba7b5f-cd93-42d3-a7db-d5391d40523e-kube-api-access-r9jnq\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.639050 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9jnq\" (UniqueName: \"kubernetes.io/projected/66ba7b5f-cd93-42d3-a7db-d5391d40523e-kube-api-access-r9jnq\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.639539 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.639578 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config\") pod 
\"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.639600 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config-secret\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.640746 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.650585 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config-secret\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.651473 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.667604 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9jnq\" (UniqueName: \"kubernetes.io/projected/66ba7b5f-cd93-42d3-a7db-d5391d40523e-kube-api-access-r9jnq\") pod \"openstackclient\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 
20:49:51.923040 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.924445 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.925915 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.926164 5029 generic.go:334] "Generic (PLEG): container finished" podID="6e726c0a-09e0-46c4-870f-440581c3af6e" containerID="ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742" exitCode=0 Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.926242 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" event={"ID":"6e726c0a-09e0-46c4-870f-440581c3af6e","Type":"ContainerDied","Data":"ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742"} Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.926276 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" event={"ID":"6e726c0a-09e0-46c4-870f-440581c3af6e","Type":"ContainerDied","Data":"53da69bc56b52352fd5e0381a0ee56ecdd3d74227d1adf560a164f1fa0099543"} Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.926389 5029 scope.go:117] "RemoveContainer" containerID="ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742" Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.951227 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6b8f9967-671e-49b9-8e28-15c9b460086e","Type":"ContainerStarted","Data":"d74aa8fd5f9ec2e5e75e94e1775ec2c12715c654930244e0b66d5abd786f76a8"} Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.951295 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 13 20:49:51 crc 
kubenswrapper[5029]: I0313 20:49:51.963467 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"85582134-3a4c-4127-8b04-5a0800fe403c","Type":"ContainerStarted","Data":"7e17ef1d3748285cf619ea045b8d0a3bf67702cc63ba801c47aa887754ecd174"} Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.963517 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"85582134-3a4c-4127-8b04-5a0800fe403c","Type":"ContainerStarted","Data":"ba5ac7c1058716f726ec39e19c38f3a36f104d89a6f163004b756c973dfe8e39"} Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.963529 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"85582134-3a4c-4127-8b04-5a0800fe403c","Type":"ContainerStarted","Data":"e64e0fcb03ef5e5edd53fa0ed3bdfba7e1d80d3eabf3fe75244ee1fd0a061b52"} Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.972221 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b5e3adcc-0538-4137-a9c4-09fb34e79fe9","Type":"ContainerStarted","Data":"fef28f899c1f119ba4006360dbe47b3be57178c490e2cbda5aca28483c7e2b39"} Mar 13 20:49:51 crc kubenswrapper[5029]: I0313 20:49:51.972261 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b5e3adcc-0538-4137-a9c4-09fb34e79fe9","Type":"ContainerStarted","Data":"7ab50c76f70dc58be5f4cab8cd447cb53ba1c29c92a0d35e263f87cdd3fbf34b"} Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.054728 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-config\") pod \"6e726c0a-09e0-46c4-870f-440581c3af6e\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.055181 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-svc\") pod \"6e726c0a-09e0-46c4-870f-440581c3af6e\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.055390 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hf4z\" (UniqueName: \"kubernetes.io/projected/6e726c0a-09e0-46c4-870f-440581c3af6e-kube-api-access-8hf4z\") pod \"6e726c0a-09e0-46c4-870f-440581c3af6e\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.055538 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-nb\") pod \"6e726c0a-09e0-46c4-870f-440581c3af6e\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.055645 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-sb\") pod \"6e726c0a-09e0-46c4-870f-440581c3af6e\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.055759 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-swift-storage-0\") pod \"6e726c0a-09e0-46c4-870f-440581c3af6e\" (UID: \"6e726c0a-09e0-46c4-870f-440581c3af6e\") " Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.056614 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 13 20:49:52 crc kubenswrapper[5029]: E0313 20:49:52.057054 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e726c0a-09e0-46c4-870f-440581c3af6e" containerName="init" Mar 13 20:49:52 crc 
kubenswrapper[5029]: I0313 20:49:52.057066 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e726c0a-09e0-46c4-870f-440581c3af6e" containerName="init" Mar 13 20:49:52 crc kubenswrapper[5029]: E0313 20:49:52.057080 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e726c0a-09e0-46c4-870f-440581c3af6e" containerName="dnsmasq-dns" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.057086 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e726c0a-09e0-46c4-870f-440581c3af6e" containerName="dnsmasq-dns" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.057252 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e726c0a-09e0-46c4-870f-440581c3af6e" containerName="dnsmasq-dns" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.059170 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.070666 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.070645084 podStartE2EDuration="3.070645084s" podCreationTimestamp="2026-03-13 20:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:52.002467784 +0000 UTC m=+1352.018550187" watchObservedRunningTime="2026-03-13 20:49:52.070645084 +0000 UTC m=+1352.086727487" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.076928 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e726c0a-09e0-46c4-870f-440581c3af6e-kube-api-access-8hf4z" (OuterVolumeSpecName: "kube-api-access-8hf4z") pod "6e726c0a-09e0-46c4-870f-440581c3af6e" (UID: "6e726c0a-09e0-46c4-870f-440581c3af6e"). InnerVolumeSpecName "kube-api-access-8hf4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.137896 5029 scope.go:117] "RemoveContainer" containerID="1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.154552 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6e726c0a-09e0-46c4-870f-440581c3af6e" (UID: "6e726c0a-09e0-46c4-870f-440581c3af6e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.167481 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnznz\" (UniqueName: \"kubernetes.io/projected/fa553312-0146-41c1-bc2e-9147af234ac8-kube-api-access-hnznz\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.167534 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa553312-0146-41c1-bc2e-9147af234ac8-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.167573 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa553312-0146-41c1-bc2e-9147af234ac8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.167602 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa553312-0146-41c1-bc2e-9147af234ac8-openstack-config\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.167659 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hf4z\" (UniqueName: \"kubernetes.io/projected/6e726c0a-09e0-46c4-870f-440581c3af6e-kube-api-access-8hf4z\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.167669 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.204301 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6e726c0a-09e0-46c4-870f-440581c3af6e" (UID: "6e726c0a-09e0-46c4-870f-440581c3af6e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.212099 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.221576 5029 scope.go:117] "RemoveContainer" containerID="ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.225431 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6e726c0a-09e0-46c4-870f-440581c3af6e" (UID: "6e726c0a-09e0-46c4-870f-440581c3af6e"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:52 crc kubenswrapper[5029]: E0313 20:49:52.225951 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742\": container with ID starting with ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742 not found: ID does not exist" containerID="ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.226020 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742"} err="failed to get container status \"ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742\": rpc error: code = NotFound desc = could not find container \"ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742\": container with ID starting with ff6e29f91e9d4d1217ae44a8a5bef2a13560676f37216fa895790923db754742 not found: ID does not exist" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.226058 5029 scope.go:117] "RemoveContainer" containerID="1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa" Mar 13 20:49:52 crc kubenswrapper[5029]: E0313 20:49:52.226472 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa\": container with ID starting with 1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa not found: ID does not exist" containerID="1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.226515 5029 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa"} err="failed to get container status \"1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa\": rpc error: code = NotFound desc = could not find container \"1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa\": container with ID starting with 1f29ac1a98dae3c5f412804ee7c68c7b19777d16a0eccbffe0353bf0cf734daa not found: ID does not exist" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.238684 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.2386609379999998 podStartE2EDuration="3.238660938s" podCreationTimestamp="2026-03-13 20:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:52.03746638 +0000 UTC m=+1352.053548793" watchObservedRunningTime="2026-03-13 20:49:52.238660938 +0000 UTC m=+1352.254743341" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.245039 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-config" (OuterVolumeSpecName: "config") pod "6e726c0a-09e0-46c4-870f-440581c3af6e" (UID: "6e726c0a-09e0-46c4-870f-440581c3af6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.259015 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e726c0a-09e0-46c4-870f-440581c3af6e" (UID: "6e726c0a-09e0-46c4-870f-440581c3af6e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.272156 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnznz\" (UniqueName: \"kubernetes.io/projected/fa553312-0146-41c1-bc2e-9147af234ac8-kube-api-access-hnznz\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.272223 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa553312-0146-41c1-bc2e-9147af234ac8-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.272285 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa553312-0146-41c1-bc2e-9147af234ac8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.272321 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa553312-0146-41c1-bc2e-9147af234ac8-openstack-config\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.272539 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.272552 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.272565 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.272576 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e726c0a-09e0-46c4-870f-440581c3af6e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.275536 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa553312-0146-41c1-bc2e-9147af234ac8-openstack-config\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.296346 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa553312-0146-41c1-bc2e-9147af234ac8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.304767 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnznz\" (UniqueName: \"kubernetes.io/projected/fa553312-0146-41c1-bc2e-9147af234ac8-kube-api-access-hnznz\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: E0313 20:49:52.313052 5029 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 20:49:52 crc kubenswrapper[5029]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_openstackclient_openstack_66ba7b5f-cd93-42d3-a7db-d5391d40523e_0(5494d1133eed2c78d5a3106edb6c0ea742ff572b4c8ab545701a0ec40f9264ea): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5494d1133eed2c78d5a3106edb6c0ea742ff572b4c8ab545701a0ec40f9264ea" Netns:"/var/run/netns/b93b531f-de05-449d-9d16-7368be987957" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5494d1133eed2c78d5a3106edb6c0ea742ff572b4c8ab545701a0ec40f9264ea;K8S_POD_UID=66ba7b5f-cd93-42d3-a7db-d5391d40523e" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/66ba7b5f-cd93-42d3-a7db-d5391d40523e]: expected pod UID "66ba7b5f-cd93-42d3-a7db-d5391d40523e" but got "fa553312-0146-41c1-bc2e-9147af234ac8" from Kube API Mar 13 20:49:52 crc kubenswrapper[5029]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 20:49:52 crc kubenswrapper[5029]: > Mar 13 20:49:52 crc kubenswrapper[5029]: E0313 20:49:52.313154 5029 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 20:49:52 crc kubenswrapper[5029]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_66ba7b5f-cd93-42d3-a7db-d5391d40523e_0(5494d1133eed2c78d5a3106edb6c0ea742ff572b4c8ab545701a0ec40f9264ea): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request 
failed with status 400: 'ContainerID:"5494d1133eed2c78d5a3106edb6c0ea742ff572b4c8ab545701a0ec40f9264ea" Netns:"/var/run/netns/b93b531f-de05-449d-9d16-7368be987957" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5494d1133eed2c78d5a3106edb6c0ea742ff572b4c8ab545701a0ec40f9264ea;K8S_POD_UID=66ba7b5f-cd93-42d3-a7db-d5391d40523e" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/66ba7b5f-cd93-42d3-a7db-d5391d40523e]: expected pod UID "66ba7b5f-cd93-42d3-a7db-d5391d40523e" but got "fa553312-0146-41c1-bc2e-9147af234ac8" from Kube API Mar 13 20:49:52 crc kubenswrapper[5029]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 20:49:52 crc kubenswrapper[5029]: > pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.319853 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa553312-0146-41c1-bc2e-9147af234ac8-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa553312-0146-41c1-bc2e-9147af234ac8\") " pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.410297 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 20:49:52 crc kubenswrapper[5029]: I0313 20:49:52.993151 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:52.998423 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7ht2z" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.004195 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6b8f9967-671e-49b9-8e28-15c9b460086e","Type":"ContainerStarted","Data":"c5475d903775a67724f601a51c063e1c2e4c250749f822f13f7e0c508cc4f514"} Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.005328 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.005633 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.030383 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.030356891 podStartE2EDuration="5.030356891s" podCreationTimestamp="2026-03-13 20:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:49:53.025532819 +0000 UTC m=+1353.041615222" watchObservedRunningTime="2026-03-13 20:49:53.030356891 +0000 UTC m=+1353.046439294" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.102601 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.110703 5029 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="66ba7b5f-cd93-42d3-a7db-d5391d40523e" podUID="fa553312-0146-41c1-bc2e-9147af234ac8" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.113878 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7ht2z"] Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.132304 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7ht2z"] Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.196973 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config-secret\") pod \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.199058 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9jnq\" (UniqueName: \"kubernetes.io/projected/66ba7b5f-cd93-42d3-a7db-d5391d40523e-kube-api-access-r9jnq\") pod \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.199113 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config\") pod \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.199142 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-combined-ca-bundle\") pod \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\" (UID: \"66ba7b5f-cd93-42d3-a7db-d5391d40523e\") " Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.199819 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "66ba7b5f-cd93-42d3-a7db-d5391d40523e" (UID: "66ba7b5f-cd93-42d3-a7db-d5391d40523e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.200552 5029 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.214012 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66ba7b5f-cd93-42d3-a7db-d5391d40523e" (UID: "66ba7b5f-cd93-42d3-a7db-d5391d40523e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.216109 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ba7b5f-cd93-42d3-a7db-d5391d40523e-kube-api-access-r9jnq" (OuterVolumeSpecName: "kube-api-access-r9jnq") pod "66ba7b5f-cd93-42d3-a7db-d5391d40523e" (UID: "66ba7b5f-cd93-42d3-a7db-d5391d40523e"). InnerVolumeSpecName "kube-api-access-r9jnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.232250 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "66ba7b5f-cd93-42d3-a7db-d5391d40523e" (UID: "66ba7b5f-cd93-42d3-a7db-d5391d40523e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.303786 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9jnq\" (UniqueName: \"kubernetes.io/projected/66ba7b5f-cd93-42d3-a7db-d5391d40523e-kube-api-access-r9jnq\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.303831 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:53 crc kubenswrapper[5029]: I0313 20:49:53.303845 5029 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66ba7b5f-cd93-42d3-a7db-d5391d40523e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 13 20:49:54 crc kubenswrapper[5029]: I0313 20:49:54.022600 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fa553312-0146-41c1-bc2e-9147af234ac8","Type":"ContainerStarted","Data":"6b23a2873d00bc5a739655c86878fc546deea533241662caf26efd25eb185ffc"} Mar 13 20:49:54 crc kubenswrapper[5029]: I0313 20:49:54.022684 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 13 20:49:54 crc kubenswrapper[5029]: I0313 20:49:54.036982 5029 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="66ba7b5f-cd93-42d3-a7db-d5391d40523e" podUID="fa553312-0146-41c1-bc2e-9147af234ac8" Mar 13 20:49:54 crc kubenswrapper[5029]: I0313 20:49:54.565968 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6c6bfdcb-59kpl" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Mar 13 20:49:54 crc kubenswrapper[5029]: I0313 20:49:54.618453 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ba7b5f-cd93-42d3-a7db-d5391d40523e" path="/var/lib/kubelet/pods/66ba7b5f-cd93-42d3-a7db-d5391d40523e/volumes" Mar 13 20:49:54 crc kubenswrapper[5029]: I0313 20:49:54.620766 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e726c0a-09e0-46c4-870f-440581c3af6e" path="/var/lib/kubelet/pods/6e726c0a-09e0-46c4-870f-440581c3af6e/volumes" Mar 13 20:49:54 crc kubenswrapper[5029]: I0313 20:49:54.679294 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.276433 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.891637 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-94bcffbb7-lqxc5"] Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.893477 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.896589 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.896844 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.897054 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.906821 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-94bcffbb7-lqxc5"] Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.974449 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-public-tls-certs\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.974549 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-config-data\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.974620 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d145e01e-08f4-42f3-b239-86e0abcb2ec1-etc-swift\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.974663 
5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-combined-ca-bundle\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.974761 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-internal-tls-certs\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.974790 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d145e01e-08f4-42f3-b239-86e0abcb2ec1-log-httpd\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.974832 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mcj4\" (UniqueName: \"kubernetes.io/projected/d145e01e-08f4-42f3-b239-86e0abcb2ec1-kube-api-access-2mcj4\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:55 crc kubenswrapper[5029]: I0313 20:49:55.974907 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d145e01e-08f4-42f3-b239-86e0abcb2ec1-run-httpd\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc 
kubenswrapper[5029]: I0313 20:49:56.076596 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d145e01e-08f4-42f3-b239-86e0abcb2ec1-log-httpd\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.076702 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mcj4\" (UniqueName: \"kubernetes.io/projected/d145e01e-08f4-42f3-b239-86e0abcb2ec1-kube-api-access-2mcj4\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.076737 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d145e01e-08f4-42f3-b239-86e0abcb2ec1-run-httpd\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.076799 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-public-tls-certs\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.076885 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-config-data\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.076954 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d145e01e-08f4-42f3-b239-86e0abcb2ec1-etc-swift\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.076989 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-combined-ca-bundle\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.077093 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-internal-tls-certs\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.077567 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d145e01e-08f4-42f3-b239-86e0abcb2ec1-log-httpd\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.080440 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d145e01e-08f4-42f3-b239-86e0abcb2ec1-run-httpd\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.087383 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-config-data\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.087990 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-internal-tls-certs\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.097259 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d145e01e-08f4-42f3-b239-86e0abcb2ec1-etc-swift\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.098653 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-combined-ca-bundle\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.107721 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mcj4\" (UniqueName: \"kubernetes.io/projected/d145e01e-08f4-42f3-b239-86e0abcb2ec1-kube-api-access-2mcj4\") pod \"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.113304 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d145e01e-08f4-42f3-b239-86e0abcb2ec1-public-tls-certs\") pod 
\"swift-proxy-94bcffbb7-lqxc5\" (UID: \"d145e01e-08f4-42f3-b239-86e0abcb2ec1\") " pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.228885 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:49:56 crc kubenswrapper[5029]: I0313 20:49:56.725372 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 13 20:49:57 crc kubenswrapper[5029]: I0313 20:49:57.705356 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:49:57 crc kubenswrapper[5029]: I0313 20:49:57.706387 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="ceilometer-central-agent" containerID="cri-o://4345c598e15dc4d1cc25b892f3080074163d3b543b379f0e51f46a057110cf45" gracePeriod=30 Mar 13 20:49:57 crc kubenswrapper[5029]: I0313 20:49:57.707468 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="proxy-httpd" containerID="cri-o://1a8f30b2ad28824fb0ca56e6be9a0dc1f59afacd4b64b809aa301a484a6476fc" gracePeriod=30 Mar 13 20:49:57 crc kubenswrapper[5029]: I0313 20:49:57.707594 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="ceilometer-notification-agent" containerID="cri-o://fff6849dd1ebc77e403bc05cdb405ff112a79886b97960311b8416316fd3da19" gracePeriod=30 Mar 13 20:49:57 crc kubenswrapper[5029]: I0313 20:49:57.707705 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="sg-core" 
containerID="cri-o://7deed472b980644a32d4a72d08df592db249ce7d8e5af3ce0022018125b1fcb7" gracePeriod=30 Mar 13 20:49:57 crc kubenswrapper[5029]: I0313 20:49:57.723132 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.171:3000/\": EOF" Mar 13 20:49:58 crc kubenswrapper[5029]: I0313 20:49:58.087008 5029 generic.go:334] "Generic (PLEG): container finished" podID="d1917286-7b0a-46c8-a296-fab758373bc5" containerID="1a8f30b2ad28824fb0ca56e6be9a0dc1f59afacd4b64b809aa301a484a6476fc" exitCode=0 Mar 13 20:49:58 crc kubenswrapper[5029]: I0313 20:49:58.087055 5029 generic.go:334] "Generic (PLEG): container finished" podID="d1917286-7b0a-46c8-a296-fab758373bc5" containerID="7deed472b980644a32d4a72d08df592db249ce7d8e5af3ce0022018125b1fcb7" exitCode=2 Mar 13 20:49:58 crc kubenswrapper[5029]: I0313 20:49:58.087079 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerDied","Data":"1a8f30b2ad28824fb0ca56e6be9a0dc1f59afacd4b64b809aa301a484a6476fc"} Mar 13 20:49:58 crc kubenswrapper[5029]: I0313 20:49:58.087110 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerDied","Data":"7deed472b980644a32d4a72d08df592db249ce7d8e5af3ce0022018125b1fcb7"} Mar 13 20:49:59 crc kubenswrapper[5029]: I0313 20:49:59.111182 5029 generic.go:334] "Generic (PLEG): container finished" podID="d1917286-7b0a-46c8-a296-fab758373bc5" containerID="fff6849dd1ebc77e403bc05cdb405ff112a79886b97960311b8416316fd3da19" exitCode=0 Mar 13 20:49:59 crc kubenswrapper[5029]: I0313 20:49:59.111214 5029 generic.go:334] "Generic (PLEG): container finished" podID="d1917286-7b0a-46c8-a296-fab758373bc5" containerID="4345c598e15dc4d1cc25b892f3080074163d3b543b379f0e51f46a057110cf45" 
exitCode=0 Mar 13 20:49:59 crc kubenswrapper[5029]: I0313 20:49:59.111238 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerDied","Data":"fff6849dd1ebc77e403bc05cdb405ff112a79886b97960311b8416316fd3da19"} Mar 13 20:49:59 crc kubenswrapper[5029]: I0313 20:49:59.111265 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerDied","Data":"4345c598e15dc4d1cc25b892f3080074163d3b543b379f0e51f46a057110cf45"} Mar 13 20:49:59 crc kubenswrapper[5029]: I0313 20:49:59.771048 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d875c8b5-6tdfp" Mar 13 20:49:59 crc kubenswrapper[5029]: I0313 20:49:59.890344 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cf8f459d4-bj2jk"] Mar 13 20:49:59 crc kubenswrapper[5029]: I0313 20:49:59.890772 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cf8f459d4-bj2jk" podUID="da8a5250-75de-4986-ab96-2415b667cac1" containerName="neutron-api" containerID="cri-o://3c0571ae25d9f6ddcd432dbf2e81e8055f5c32434c78aea3186d589972cd419a" gracePeriod=30 Mar 13 20:49:59 crc kubenswrapper[5029]: I0313 20:49:59.891153 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cf8f459d4-bj2jk" podUID="da8a5250-75de-4986-ab96-2415b667cac1" containerName="neutron-httpd" containerID="cri-o://ce6e450fa61563912ee229fa5e741c1f13eb1c053664e5a8dcb5162c835a2236" gracePeriod=30 Mar 13 20:49:59 crc kubenswrapper[5029]: I0313 20:49:59.909967 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-94bcffbb7-lqxc5"] Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.034977 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Mar 13 
20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.156535 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9813a0d8-78d8-41ea-a5af-b57454a8e0a0","Type":"ContainerStarted","Data":"8979b2347ffdf7edc624639bd2f223ea6a2978f06c16c94d7c5946ea3f413825"} Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.178151 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557250-jwpms"] Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.180006 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557250-jwpms" Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.184147 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.184396 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.184582 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.187760 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557250-jwpms"] Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.308887 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvps9\" (UniqueName: \"kubernetes.io/projected/274b7405-641b-4d9c-90b6-7bc8d511d5ea-kube-api-access-cvps9\") pod \"auto-csr-approver-29557250-jwpms\" (UID: \"274b7405-641b-4d9c-90b6-7bc8d511d5ea\") " pod="openshift-infra/auto-csr-approver-29557250-jwpms" Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.412358 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvps9\" (UniqueName: 
\"kubernetes.io/projected/274b7405-641b-4d9c-90b6-7bc8d511d5ea-kube-api-access-cvps9\") pod \"auto-csr-approver-29557250-jwpms\" (UID: \"274b7405-641b-4d9c-90b6-7bc8d511d5ea\") " pod="openshift-infra/auto-csr-approver-29557250-jwpms" Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.438070 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvps9\" (UniqueName: \"kubernetes.io/projected/274b7405-641b-4d9c-90b6-7bc8d511d5ea-kube-api-access-cvps9\") pod \"auto-csr-approver-29557250-jwpms\" (UID: \"274b7405-641b-4d9c-90b6-7bc8d511d5ea\") " pod="openshift-infra/auto-csr-approver-29557250-jwpms" Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.535648 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557250-jwpms" Mar 13 20:50:00 crc kubenswrapper[5029]: I0313 20:50:00.590331 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Mar 13 20:50:01 crc kubenswrapper[5029]: I0313 20:50:01.173266 5029 generic.go:334] "Generic (PLEG): container finished" podID="da8a5250-75de-4986-ab96-2415b667cac1" containerID="ce6e450fa61563912ee229fa5e741c1f13eb1c053664e5a8dcb5162c835a2236" exitCode=0 Mar 13 20:50:01 crc kubenswrapper[5029]: I0313 20:50:01.173319 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf8f459d4-bj2jk" event={"ID":"da8a5250-75de-4986-ab96-2415b667cac1","Type":"ContainerDied","Data":"ce6e450fa61563912ee229fa5e741c1f13eb1c053664e5a8dcb5162c835a2236"} Mar 13 20:50:01 crc kubenswrapper[5029]: I0313 20:50:01.949983 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:50:01 crc kubenswrapper[5029]: I0313 20:50:01.950564 5029 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:50:01 crc kubenswrapper[5029]: I0313 20:50:01.950632 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:50:01 crc kubenswrapper[5029]: I0313 20:50:01.954010 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc08a3f0bf62f626b96edf0adf5dbb9a0493ba7c49c9be50ad8bce4dd83f3787"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 20:50:01 crc kubenswrapper[5029]: I0313 20:50:01.954092 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://fc08a3f0bf62f626b96edf0adf5dbb9a0493ba7c49c9be50ad8bce4dd83f3787" gracePeriod=600 Mar 13 20:50:02 crc kubenswrapper[5029]: I0313 20:50:02.191227 5029 generic.go:334] "Generic (PLEG): container finished" podID="da8a5250-75de-4986-ab96-2415b667cac1" containerID="3c0571ae25d9f6ddcd432dbf2e81e8055f5c32434c78aea3186d589972cd419a" exitCode=0 Mar 13 20:50:02 crc kubenswrapper[5029]: I0313 20:50:02.191297 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf8f459d4-bj2jk" event={"ID":"da8a5250-75de-4986-ab96-2415b667cac1","Type":"ContainerDied","Data":"3c0571ae25d9f6ddcd432dbf2e81e8055f5c32434c78aea3186d589972cd419a"} Mar 13 20:50:02 crc kubenswrapper[5029]: I0313 20:50:02.194559 5029 
generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="fc08a3f0bf62f626b96edf0adf5dbb9a0493ba7c49c9be50ad8bce4dd83f3787" exitCode=0 Mar 13 20:50:02 crc kubenswrapper[5029]: I0313 20:50:02.194594 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"fc08a3f0bf62f626b96edf0adf5dbb9a0493ba7c49c9be50ad8bce4dd83f3787"} Mar 13 20:50:02 crc kubenswrapper[5029]: I0313 20:50:02.194624 5029 scope.go:117] "RemoveContainer" containerID="098cf3f8300a8686d628684223c880e3efcc22b58099225528ac37cb2f271026" Mar 13 20:50:02 crc kubenswrapper[5029]: I0313 20:50:02.780045 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Mar 13 20:50:02 crc kubenswrapper[5029]: I0313 20:50:02.825340 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Mar 13 20:50:03 crc kubenswrapper[5029]: I0313 20:50:03.210352 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerName="manila-scheduler" containerID="cri-o://1af44e7a4cbdd631c38b1f259f5eccd45c213b33e2b3d60e298a9444b970e01a" gracePeriod=30 Mar 13 20:50:03 crc kubenswrapper[5029]: I0313 20:50:03.210794 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerName="probe" containerID="cri-o://2d2df07b6bd1464fe4775d4aeab6f5aa768daaa057845bbf1759af6f34ed81f1" gracePeriod=30 Mar 13 20:50:04 crc kubenswrapper[5029]: I0313 20:50:04.224718 5029 generic.go:334] "Generic (PLEG): container finished" podID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerID="2d2df07b6bd1464fe4775d4aeab6f5aa768daaa057845bbf1759af6f34ed81f1" exitCode=0 Mar 13 20:50:04 
crc kubenswrapper[5029]: I0313 20:50:04.224776 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8","Type":"ContainerDied","Data":"2d2df07b6bd1464fe4775d4aeab6f5aa768daaa057845bbf1759af6f34ed81f1"} Mar 13 20:50:04 crc kubenswrapper[5029]: I0313 20:50:04.566078 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6c6bfdcb-59kpl" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Mar 13 20:50:04 crc kubenswrapper[5029]: I0313 20:50:04.566286 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:50:05 crc kubenswrapper[5029]: I0313 20:50:05.238679 5029 generic.go:334] "Generic (PLEG): container finished" podID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerID="1af44e7a4cbdd631c38b1f259f5eccd45c213b33e2b3d60e298a9444b970e01a" exitCode=0 Mar 13 20:50:05 crc kubenswrapper[5029]: I0313 20:50:05.238756 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8","Type":"ContainerDied","Data":"1af44e7a4cbdd631c38b1f259f5eccd45c213b33e2b3d60e298a9444b970e01a"} Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.256681 5029 generic.go:334] "Generic (PLEG): container finished" podID="16129875-de71-41c7-8c75-17a279ded4b3" containerID="8e3b1a9c865bfb968583822bac1c553c7c415163692c70679545117119809d02" exitCode=137 Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.256741 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"16129875-de71-41c7-8c75-17a279ded4b3","Type":"ContainerDied","Data":"8e3b1a9c865bfb968583822bac1c553c7c415163692c70679545117119809d02"} Mar 13 20:50:06 crc 
kubenswrapper[5029]: I0313 20:50:06.618167 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="16129875-de71-41c7-8c75-17a279ded4b3" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.176:8776/healthcheck\": dial tcp 10.217.0.176:8776: connect: connection refused" Mar 13 20:50:06 crc kubenswrapper[5029]: W0313 20:50:06.645225 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd145e01e_08f4_42f3_b239_86e0abcb2ec1.slice/crio-0776e6664fc84ffc34b0e0277048459769b4d93520326de4143337e549ed4fc3 WatchSource:0}: Error finding container 0776e6664fc84ffc34b0e0277048459769b4d93520326de4143337e549ed4fc3: Status 404 returned error can't find the container with id 0776e6664fc84ffc34b0e0277048459769b4d93520326de4143337e549ed4fc3 Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.936168 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.997793 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-run-httpd\") pod \"d1917286-7b0a-46c8-a296-fab758373bc5\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.997899 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-combined-ca-bundle\") pod \"d1917286-7b0a-46c8-a296-fab758373bc5\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.997977 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-sg-core-conf-yaml\") pod \"d1917286-7b0a-46c8-a296-fab758373bc5\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.998084 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-log-httpd\") pod \"d1917286-7b0a-46c8-a296-fab758373bc5\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.998167 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lvdh\" (UniqueName: \"kubernetes.io/projected/d1917286-7b0a-46c8-a296-fab758373bc5-kube-api-access-4lvdh\") pod \"d1917286-7b0a-46c8-a296-fab758373bc5\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.998199 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-config-data\") pod \"d1917286-7b0a-46c8-a296-fab758373bc5\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.998231 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-scripts\") pod \"d1917286-7b0a-46c8-a296-fab758373bc5\" (UID: \"d1917286-7b0a-46c8-a296-fab758373bc5\") " Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.999044 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d1917286-7b0a-46c8-a296-fab758373bc5" (UID: "d1917286-7b0a-46c8-a296-fab758373bc5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:06 crc kubenswrapper[5029]: I0313 20:50:06.999524 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d1917286-7b0a-46c8-a296-fab758373bc5" (UID: "d1917286-7b0a-46c8-a296-fab758373bc5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.011113 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-scripts" (OuterVolumeSpecName: "scripts") pod "d1917286-7b0a-46c8-a296-fab758373bc5" (UID: "d1917286-7b0a-46c8-a296-fab758373bc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.016130 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1917286-7b0a-46c8-a296-fab758373bc5-kube-api-access-4lvdh" (OuterVolumeSpecName: "kube-api-access-4lvdh") pod "d1917286-7b0a-46c8-a296-fab758373bc5" (UID: "d1917286-7b0a-46c8-a296-fab758373bc5"). InnerVolumeSpecName "kube-api-access-4lvdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.060050 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d1917286-7b0a-46c8-a296-fab758373bc5" (UID: "d1917286-7b0a-46c8-a296-fab758373bc5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.104559 5029 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.104762 5029 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.105136 5029 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1917286-7b0a-46c8-a296-fab758373bc5-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.105227 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lvdh\" (UniqueName: \"kubernetes.io/projected/d1917286-7b0a-46c8-a296-fab758373bc5-kube-api-access-4lvdh\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.105310 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.149127 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1917286-7b0a-46c8-a296-fab758373bc5" (UID: "d1917286-7b0a-46c8-a296-fab758373bc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.209677 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.302981 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.302998 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1917286-7b0a-46c8-a296-fab758373bc5","Type":"ContainerDied","Data":"39d62eb099cd48da730bbb44f19a389cc783684a5ef07b5d39efe8edddec8a30"} Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.303077 5029 scope.go:117] "RemoveContainer" containerID="1a8f30b2ad28824fb0ca56e6be9a0dc1f59afacd4b64b809aa301a484a6476fc" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.311185 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-94bcffbb7-lqxc5" event={"ID":"d145e01e-08f4-42f3-b239-86e0abcb2ec1","Type":"ContainerStarted","Data":"0776e6664fc84ffc34b0e0277048459769b4d93520326de4143337e549ed4fc3"} Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.311269 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.372157 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-config-data" (OuterVolumeSpecName: "config-data") pod "d1917286-7b0a-46c8-a296-fab758373bc5" (UID: "d1917286-7b0a-46c8-a296-fab758373bc5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.426195 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16129875-de71-41c7-8c75-17a279ded4b3-etc-machine-id\") pod \"16129875-de71-41c7-8c75-17a279ded4b3\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.426287 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-combined-ca-bundle\") pod \"16129875-de71-41c7-8c75-17a279ded4b3\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.426321 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-scripts\") pod \"16129875-de71-41c7-8c75-17a279ded4b3\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.426387 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data\") pod \"16129875-de71-41c7-8c75-17a279ded4b3\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.426550 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16129875-de71-41c7-8c75-17a279ded4b3-logs\") pod \"16129875-de71-41c7-8c75-17a279ded4b3\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.426582 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6smh8\" (UniqueName: 
\"kubernetes.io/projected/16129875-de71-41c7-8c75-17a279ded4b3-kube-api-access-6smh8\") pod \"16129875-de71-41c7-8c75-17a279ded4b3\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.426617 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data-custom\") pod \"16129875-de71-41c7-8c75-17a279ded4b3\" (UID: \"16129875-de71-41c7-8c75-17a279ded4b3\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.427351 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1917286-7b0a-46c8-a296-fab758373bc5-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.428314 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16129875-de71-41c7-8c75-17a279ded4b3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "16129875-de71-41c7-8c75-17a279ded4b3" (UID: "16129875-de71-41c7-8c75-17a279ded4b3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.428764 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16129875-de71-41c7-8c75-17a279ded4b3-logs" (OuterVolumeSpecName: "logs") pod "16129875-de71-41c7-8c75-17a279ded4b3" (UID: "16129875-de71-41c7-8c75-17a279ded4b3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.433050 5029 scope.go:117] "RemoveContainer" containerID="7deed472b980644a32d4a72d08df592db249ce7d8e5af3ce0022018125b1fcb7" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.437645 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-scripts" (OuterVolumeSpecName: "scripts") pod "16129875-de71-41c7-8c75-17a279ded4b3" (UID: "16129875-de71-41c7-8c75-17a279ded4b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.448647 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16129875-de71-41c7-8c75-17a279ded4b3-kube-api-access-6smh8" (OuterVolumeSpecName: "kube-api-access-6smh8") pod "16129875-de71-41c7-8c75-17a279ded4b3" (UID: "16129875-de71-41c7-8c75-17a279ded4b3"). InnerVolumeSpecName "kube-api-access-6smh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.448917 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "16129875-de71-41c7-8c75-17a279ded4b3" (UID: "16129875-de71-41c7-8c75-17a279ded4b3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.516537 5029 scope.go:117] "RemoveContainer" containerID="fff6849dd1ebc77e403bc05cdb405ff112a79886b97960311b8416316fd3da19" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.530955 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16129875-de71-41c7-8c75-17a279ded4b3-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.530980 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6smh8\" (UniqueName: \"kubernetes.io/projected/16129875-de71-41c7-8c75-17a279ded4b3-kube-api-access-6smh8\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.530992 5029 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.531000 5029 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16129875-de71-41c7-8c75-17a279ded4b3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.531008 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.538683 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16129875-de71-41c7-8c75-17a279ded4b3" (UID: "16129875-de71-41c7-8c75-17a279ded4b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.545896 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.631826 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-combined-ca-bundle\") pod \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.631966 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-etc-machine-id\") pod \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.632058 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-scripts\") pod \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.632114 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data\") pod \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.632185 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxrhl\" (UniqueName: \"kubernetes.io/projected/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-kube-api-access-bxrhl\") pod \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " Mar 
13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.632215 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data-custom\") pod \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\" (UID: \"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8\") " Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.632105 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" (UID: "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.632743 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.632757 5029 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.651323 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-scripts" (OuterVolumeSpecName: "scripts") pod "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" (UID: "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.662183 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-kube-api-access-bxrhl" (OuterVolumeSpecName: "kube-api-access-bxrhl") pod "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" (UID: "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8"). InnerVolumeSpecName "kube-api-access-bxrhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.664403 5029 scope.go:117] "RemoveContainer" containerID="4345c598e15dc4d1cc25b892f3080074163d3b543b379f0e51f46a057110cf45" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.664406 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" (UID: "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.668998 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data" (OuterVolumeSpecName: "config-data") pod "16129875-de71-41c7-8c75-17a279ded4b3" (UID: "16129875-de71-41c7-8c75-17a279ded4b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.737931 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16129875-de71-41c7-8c75-17a279ded4b3-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.737978 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.737988 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxrhl\" (UniqueName: \"kubernetes.io/projected/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-kube-api-access-bxrhl\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.737998 5029 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.768603 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557250-jwpms"] Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.857447 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" (UID: "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.947061 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:07 crc kubenswrapper[5029]: I0313 20:50:07.971083 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data" (OuterVolumeSpecName: "config-data") pod "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" (UID: "8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.049160 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.061042 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cf8f459d4-bj2jk" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.131132 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.150644 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-config\") pod \"da8a5250-75de-4986-ab96-2415b667cac1\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.151082 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-ovndb-tls-certs\") pod \"da8a5250-75de-4986-ab96-2415b667cac1\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.151207 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqhf8\" (UniqueName: \"kubernetes.io/projected/da8a5250-75de-4986-ab96-2415b667cac1-kube-api-access-mqhf8\") pod \"da8a5250-75de-4986-ab96-2415b667cac1\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.151311 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-combined-ca-bundle\") pod \"da8a5250-75de-4986-ab96-2415b667cac1\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.151595 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-httpd-config\") pod \"da8a5250-75de-4986-ab96-2415b667cac1\" (UID: \"da8a5250-75de-4986-ab96-2415b667cac1\") " Mar 13 20:50:08 crc 
kubenswrapper[5029]: I0313 20:50:08.157355 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.166138 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "da8a5250-75de-4986-ab96-2415b667cac1" (UID: "da8a5250-75de-4986-ab96-2415b667cac1"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.166595 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8a5250-75de-4986-ab96-2415b667cac1-kube-api-access-mqhf8" (OuterVolumeSpecName: "kube-api-access-mqhf8") pod "da8a5250-75de-4986-ab96-2415b667cac1" (UID: "da8a5250-75de-4986-ab96-2415b667cac1"). InnerVolumeSpecName "kube-api-access-mqhf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.177176 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 20:50:08.177939 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8a5250-75de-4986-ab96-2415b667cac1" containerName="neutron-httpd" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.178062 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8a5250-75de-4986-ab96-2415b667cac1" containerName="neutron-httpd" Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 20:50:08.178161 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8a5250-75de-4986-ab96-2415b667cac1" containerName="neutron-api" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.178235 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8a5250-75de-4986-ab96-2415b667cac1" containerName="neutron-api" Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 
20:50:08.178340 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="sg-core" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.178410 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="sg-core" Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 20:50:08.178487 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16129875-de71-41c7-8c75-17a279ded4b3" containerName="cinder-api" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.178557 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="16129875-de71-41c7-8c75-17a279ded4b3" containerName="cinder-api" Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 20:50:08.178654 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="proxy-httpd" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.178735 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="proxy-httpd" Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 20:50:08.178819 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerName="probe" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.181903 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerName="probe" Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 20:50:08.182084 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="ceilometer-notification-agent" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.182157 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="ceilometer-notification-agent" Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 20:50:08.182240 5029 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="16129875-de71-41c7-8c75-17a279ded4b3" containerName="cinder-api-log" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.182331 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="16129875-de71-41c7-8c75-17a279ded4b3" containerName="cinder-api-log" Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 20:50:08.182394 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="ceilometer-central-agent" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.182453 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="ceilometer-central-agent" Mar 13 20:50:08 crc kubenswrapper[5029]: E0313 20:50:08.182517 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerName="manila-scheduler" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.182572 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerName="manila-scheduler" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.187163 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerName="probe" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.187412 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="16129875-de71-41c7-8c75-17a279ded4b3" containerName="cinder-api" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.187490 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="16129875-de71-41c7-8c75-17a279ded4b3" containerName="cinder-api-log" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.187564 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8a5250-75de-4986-ab96-2415b667cac1" containerName="neutron-api" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.187658 5029 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" containerName="manila-scheduler" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.187740 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="sg-core" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.187893 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="proxy-httpd" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.187987 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="ceilometer-notification-agent" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.188088 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="ceilometer-central-agent" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.188169 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8a5250-75de-4986-ab96-2415b667cac1" containerName="neutron-httpd" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.210361 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.210636 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.213784 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.214066 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.254677 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.254761 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-config-data\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.254962 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-scripts\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.255151 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.255182 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrxws\" (UniqueName: \"kubernetes.io/projected/995e8918-fc5c-4cfb-9306-6c4953b72c03-kube-api-access-xrxws\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.255206 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-run-httpd\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.255273 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-log-httpd\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.255357 5029 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.255375 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqhf8\" (UniqueName: \"kubernetes.io/projected/da8a5250-75de-4986-ab96-2415b667cac1-kube-api-access-mqhf8\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.338369 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da8a5250-75de-4986-ab96-2415b667cac1" (UID: "da8a5250-75de-4986-ab96-2415b667cac1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.338501 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8","Type":"ContainerDied","Data":"84b0bdf23587374e634fa02550e823eb25d9fb70a5ae5fb84d9fe52be727d114"} Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.339033 5029 scope.go:117] "RemoveContainer" containerID="2d2df07b6bd1464fe4775d4aeab6f5aa768daaa057845bbf1759af6f34ed81f1" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.340221 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.341779 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fa553312-0146-41c1-bc2e-9147af234ac8","Type":"ContainerStarted","Data":"4271bb5da4abe39f1ff1fdaddfa5432c0b271cdd61c79f300d19734b1a1b87ef"} Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.345999 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9813a0d8-78d8-41ea-a5af-b57454a8e0a0","Type":"ContainerStarted","Data":"d0471cb6810af61b9451da466f0de31ffb895c5b9ad535765ba9993d46c02563"} Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.352690 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-94bcffbb7-lqxc5" event={"ID":"d145e01e-08f4-42f3-b239-86e0abcb2ec1","Type":"ContainerStarted","Data":"77b5f1f689a0626d291992af591c29304b751bc02f6f4ff01bcd8c8ae62f2163"} Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.352736 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-94bcffbb7-lqxc5" event={"ID":"d145e01e-08f4-42f3-b239-86e0abcb2ec1","Type":"ContainerStarted","Data":"149def206f8e8e852bf39734a573c283f5faf5ae76678cccbc3d51cd27c2bfbf"} Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 
20:50:08.354024 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.354303 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-config" (OuterVolumeSpecName: "config") pod "da8a5250-75de-4986-ab96-2415b667cac1" (UID: "da8a5250-75de-4986-ab96-2415b667cac1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.354306 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.359100 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.359145 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrxws\" (UniqueName: \"kubernetes.io/projected/995e8918-fc5c-4cfb-9306-6c4953b72c03-kube-api-access-xrxws\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.359172 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-run-httpd\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.359219 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-log-httpd\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.359240 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.359276 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-config-data\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.359359 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-scripts\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.359421 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.359435 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.368435 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-log-httpd\") pod 
\"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.369572 5029 scope.go:117] "RemoveContainer" containerID="1af44e7a4cbdd631c38b1f259f5eccd45c213b33e2b3d60e298a9444b970e01a" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.371828 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cf8f459d4-bj2jk" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.371823 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cf8f459d4-bj2jk" event={"ID":"da8a5250-75de-4986-ab96-2415b667cac1","Type":"ContainerDied","Data":"117c343948f7843f948c57b22ae398dec5b30e91c41cfabaafd081a87609765f"} Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.372700 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-run-httpd\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.375238 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557250-jwpms" event={"ID":"274b7405-641b-4d9c-90b6-7bc8d511d5ea","Type":"ContainerStarted","Data":"32ca6b46f128fa6ff9144af0c9f3d14443d4f3b802ca452864f34be3357ed3f2"} Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.378920 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.517380094 podStartE2EDuration="17.378889262s" podCreationTimestamp="2026-03-13 20:49:51 +0000 UTC" firstStartedPulling="2026-03-13 20:49:53.006603262 +0000 UTC m=+1353.022685655" lastFinishedPulling="2026-03-13 20:50:06.86811242 +0000 UTC m=+1366.884194823" observedRunningTime="2026-03-13 20:50:08.367962114 +0000 UTC m=+1368.384044517" watchObservedRunningTime="2026-03-13 
20:50:08.378889262 +0000 UTC m=+1368.394971675" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.383047 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-scripts\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.390545 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.391379 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.392996 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-config-data\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.402906 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "da8a5250-75de-4986-ab96-2415b667cac1" (UID: "da8a5250-75de-4986-ab96-2415b667cac1"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.404182 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"42ae9c192c95047ca08bd80103ba761f255a1bb01b61e6cc285f78d6d6c0169b"} Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.408352 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"16129875-de71-41c7-8c75-17a279ded4b3","Type":"ContainerDied","Data":"d552748531ba5cfda4eb07732b3b00790763c9bc4dc06536e299bb65d1f3d829"} Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.408482 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.411749 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=11.179882571 podStartE2EDuration="28.411728668s" podCreationTimestamp="2026-03-13 20:49:40 +0000 UTC" firstStartedPulling="2026-03-13 20:49:42.019049173 +0000 UTC m=+1342.035131576" lastFinishedPulling="2026-03-13 20:49:59.25089527 +0000 UTC m=+1359.266977673" observedRunningTime="2026-03-13 20:50:08.407907013 +0000 UTC m=+1368.423989436" watchObservedRunningTime="2026-03-13 20:50:08.411728668 +0000 UTC m=+1368.427811071" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.412411 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrxws\" (UniqueName: \"kubernetes.io/projected/995e8918-fc5c-4cfb-9306-6c4953b72c03-kube-api-access-xrxws\") pod \"ceilometer-0\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.414041 5029 scope.go:117] "RemoveContainer" 
containerID="ce6e450fa61563912ee229fa5e741c1f13eb1c053664e5a8dcb5162c835a2236" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.487541 5029 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da8a5250-75de-4986-ab96-2415b667cac1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.490815 5029 scope.go:117] "RemoveContainer" containerID="3c0571ae25d9f6ddcd432dbf2e81e8055f5c32434c78aea3186d589972cd419a" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.510266 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-94bcffbb7-lqxc5" podStartSLOduration=13.510232466 podStartE2EDuration="13.510232466s" podCreationTimestamp="2026-03-13 20:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:50:08.448174983 +0000 UTC m=+1368.464257406" watchObservedRunningTime="2026-03-13 20:50:08.510232466 +0000 UTC m=+1368.526314889" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.529827 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.533564 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.539681 5029 scope.go:117] "RemoveContainer" containerID="8e3b1a9c865bfb968583822bac1c553c7c415163692c70679545117119809d02" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.546836 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.559185 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.562312 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.568052 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.632894 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8" path="/var/lib/kubelet/pods/8ac6dfa8-d3d9-4feb-abe5-929c7b2e1ea8/volumes" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.633602 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" path="/var/lib/kubelet/pods/d1917286-7b0a-46c8-a296-fab758373bc5/volumes" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.636342 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.636394 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.644174 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 13 
20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.652936 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.654908 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.657176 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.658055 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.658344 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.659175 5029 scope.go:117] "RemoveContainer" containerID="45b58aa69edc5e33cb8fb7a9bdf0eab5993df519135f2223463872501fad3d31" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.669635 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.694961 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.695155 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-config-data\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.695292 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqz4h\" (UniqueName: \"kubernetes.io/projected/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-kube-api-access-dqz4h\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.695446 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.695653 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-scripts\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.695696 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.797533 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.797915 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-config-data\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.797968 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.797982 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cf8f459d4-bj2jk"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.797999 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-config-data\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798135 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ffb2426-fbfd-4856-a679-649eac82c558-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798174 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqz4h\" (UniqueName: \"kubernetes.io/projected/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-kube-api-access-dqz4h\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798215 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55rv\" (UniqueName: \"kubernetes.io/projected/2ffb2426-fbfd-4856-a679-649eac82c558-kube-api-access-z55rv\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798281 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-scripts\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798303 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798362 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798397 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ffb2426-fbfd-4856-a679-649eac82c558-logs\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798461 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798512 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-config-data-custom\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798529 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-scripts\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.798551 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.800213 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.809025 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7cf8f459d4-bj2jk"] Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.810746 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-config-data\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.818523 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.827406 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-scripts\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.829983 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqz4h\" (UniqueName: \"kubernetes.io/projected/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-kube-api-access-dqz4h\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.830564 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf\") " pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.900746 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.901966 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-scripts\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.901997 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.902053 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ffb2426-fbfd-4856-a679-649eac82c558-logs\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.902104 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.902146 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-config-data-custom\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.902219 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.902266 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-config-data\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.902368 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ffb2426-fbfd-4856-a679-649eac82c558-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.903455 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z55rv\" (UniqueName: \"kubernetes.io/projected/2ffb2426-fbfd-4856-a679-649eac82c558-kube-api-access-z55rv\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.909184 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.909549 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc 
kubenswrapper[5029]: I0313 20:50:08.910414 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ffb2426-fbfd-4856-a679-649eac82c558-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.913045 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ffb2426-fbfd-4856-a679-649eac82c558-logs\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.913436 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-config-data-custom\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.928553 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.930488 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-config-data\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.930753 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ffb2426-fbfd-4856-a679-649eac82c558-scripts\") pod \"cinder-api-0\" (UID: 
\"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:08 crc kubenswrapper[5029]: I0313 20:50:08.938410 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z55rv\" (UniqueName: \"kubernetes.io/projected/2ffb2426-fbfd-4856-a679-649eac82c558-kube-api-access-z55rv\") pod \"cinder-api-0\" (UID: \"2ffb2426-fbfd-4856-a679-649eac82c558\") " pod="openstack/cinder-api-0" Mar 13 20:50:09 crc kubenswrapper[5029]: I0313 20:50:09.071559 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 13 20:50:09 crc kubenswrapper[5029]: I0313 20:50:09.207619 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:09 crc kubenswrapper[5029]: W0313 20:50:09.243902 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod995e8918_fc5c_4cfb_9306_6c4953b72c03.slice/crio-78ea453f73764299983e292db5610ee54fe279ece013345a632b18bb94438d0a WatchSource:0}: Error finding container 78ea453f73764299983e292db5610ee54fe279ece013345a632b18bb94438d0a: Status 404 returned error can't find the container with id 78ea453f73764299983e292db5610ee54fe279ece013345a632b18bb94438d0a Mar 13 20:50:09 crc kubenswrapper[5029]: I0313 20:50:09.444419 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Mar 13 20:50:09 crc kubenswrapper[5029]: I0313 20:50:09.448206 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerStarted","Data":"78ea453f73764299983e292db5610ee54fe279ece013345a632b18bb94438d0a"} Mar 13 20:50:09 crc kubenswrapper[5029]: I0313 20:50:09.500399 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557250-jwpms" podStartSLOduration=8.547067221 
podStartE2EDuration="9.500372532s" podCreationTimestamp="2026-03-13 20:50:00 +0000 UTC" firstStartedPulling="2026-03-13 20:50:07.878024926 +0000 UTC m=+1367.894107329" lastFinishedPulling="2026-03-13 20:50:08.831330237 +0000 UTC m=+1368.847412640" observedRunningTime="2026-03-13 20:50:09.476707337 +0000 UTC m=+1369.492789730" watchObservedRunningTime="2026-03-13 20:50:09.500372532 +0000 UTC m=+1369.516454935" Mar 13 20:50:09 crc kubenswrapper[5029]: I0313 20:50:09.648583 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 13 20:50:09 crc kubenswrapper[5029]: W0313 20:50:09.649801 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ffb2426_fbfd_4856_a679_649eac82c558.slice/crio-fec2363e3a8d958773fdf36fcb9695b979b2c576d239a502bcb7e1f3a6d8c1aa WatchSource:0}: Error finding container fec2363e3a8d958773fdf36fcb9695b979b2c576d239a502bcb7e1f3a6d8c1aa: Status 404 returned error can't find the container with id fec2363e3a8d958773fdf36fcb9695b979b2c576d239a502bcb7e1f3a6d8c1aa Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.354447 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.508467 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf","Type":"ContainerStarted","Data":"0912e50d922272dbbb88bdaa3e3c368b94f6adf9afae17f7c87dc250dc15fe59"} Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.508539 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf","Type":"ContainerStarted","Data":"2e1e3e011ae97b1fb5027fb26e0412e43a9bb7594bb147290aaa1ea83185fd91"} Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.513955 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="274b7405-641b-4d9c-90b6-7bc8d511d5ea" containerID="5b2d1f7d891c6b0bf77f8478c6879a938efd6f0883c65cf39020967e8fb32c79" exitCode=0 Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.514043 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557250-jwpms" event={"ID":"274b7405-641b-4d9c-90b6-7bc8d511d5ea","Type":"ContainerDied","Data":"5b2d1f7d891c6b0bf77f8478c6879a938efd6f0883c65cf39020967e8fb32c79"} Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.526337 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2ffb2426-fbfd-4856-a679-649eac82c558","Type":"ContainerStarted","Data":"fec2363e3a8d958773fdf36fcb9695b979b2c576d239a502bcb7e1f3a6d8c1aa"} Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.549196 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerStarted","Data":"765d9464bd38ad3c79a731bc9be269b8e5710fd37d2167899b0be9be1244a0f8"} Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.662632 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16129875-de71-41c7-8c75-17a279ded4b3" path="/var/lib/kubelet/pods/16129875-de71-41c7-8c75-17a279ded4b3/volumes" Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.663441 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8a5250-75de-4986-ab96-2415b667cac1" path="/var/lib/kubelet/pods/da8a5250-75de-4986-ab96-2415b667cac1/volumes" Mar 13 20:50:10 crc kubenswrapper[5029]: I0313 20:50:10.686082 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Mar 13 20:50:11 crc kubenswrapper[5029]: I0313 20:50:11.567449 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerStarted","Data":"fd2bb92a640e831db5b69a6b44e3fb1097befb1d7e11180e2a6cb099bcb4e1c3"} Mar 13 20:50:11 crc kubenswrapper[5029]: I0313 20:50:11.575875 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf","Type":"ContainerStarted","Data":"3363fc7abcda70eb8a06b3dc0114b35d4fce3c1201415bb47d696e680f63cb47"} Mar 13 20:50:11 crc kubenswrapper[5029]: I0313 20:50:11.584621 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2ffb2426-fbfd-4856-a679-649eac82c558","Type":"ContainerStarted","Data":"ae1f0c79b2925ccedc775ce2301482ea98979c7a9644feca98a3bf18a9ec6353"} Mar 13 20:50:11 crc kubenswrapper[5029]: I0313 20:50:11.609424 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.609405898 podStartE2EDuration="3.609405898s" podCreationTimestamp="2026-03-13 20:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:50:11.603495067 +0000 UTC m=+1371.619577490" watchObservedRunningTime="2026-03-13 20:50:11.609405898 +0000 UTC m=+1371.625488301" Mar 13 20:50:11 crc kubenswrapper[5029]: I0313 20:50:11.994537 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.276831 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557250-jwpms" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.320830 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvps9\" (UniqueName: \"kubernetes.io/projected/274b7405-641b-4d9c-90b6-7bc8d511d5ea-kube-api-access-cvps9\") pod \"274b7405-641b-4d9c-90b6-7bc8d511d5ea\" (UID: \"274b7405-641b-4d9c-90b6-7bc8d511d5ea\") " Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.341954 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274b7405-641b-4d9c-90b6-7bc8d511d5ea-kube-api-access-cvps9" (OuterVolumeSpecName: "kube-api-access-cvps9") pod "274b7405-641b-4d9c-90b6-7bc8d511d5ea" (UID: "274b7405-641b-4d9c-90b6-7bc8d511d5ea"). InnerVolumeSpecName "kube-api-access-cvps9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.434401 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvps9\" (UniqueName: \"kubernetes.io/projected/274b7405-641b-4d9c-90b6-7bc8d511d5ea-kube-api-access-cvps9\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.662629 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557244-t2n87"] Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.662679 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557244-t2n87"] Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.666630 5029 generic.go:334] "Generic (PLEG): container finished" podID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerID="a6ab2709590ed237e109db82ee33f472cd645e5b45627f0cfb90ef7afafbc2dc" exitCode=137 Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.666704 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6c6bfdcb-59kpl" 
event={"ID":"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9","Type":"ContainerDied","Data":"a6ab2709590ed237e109db82ee33f472cd645e5b45627f0cfb90ef7afafbc2dc"} Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.691479 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557250-jwpms" event={"ID":"274b7405-641b-4d9c-90b6-7bc8d511d5ea","Type":"ContainerDied","Data":"32ca6b46f128fa6ff9144af0c9f3d14443d4f3b802ca452864f34be3357ed3f2"} Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.691727 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ca6b46f128fa6ff9144af0c9f3d14443d4f3b802ca452864f34be3357ed3f2" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.691878 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557250-jwpms" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.721503 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2ffb2426-fbfd-4856-a679-649eac82c558","Type":"ContainerStarted","Data":"57f7e3d742c94b8331adc9267f0952ac17cc068c222398c5907ab28c2b0b1721"} Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.721558 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.747480 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.74745432 podStartE2EDuration="4.74745432s" podCreationTimestamp="2026-03-13 20:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:50:12.741508458 +0000 UTC m=+1372.757590871" watchObservedRunningTime="2026-03-13 20:50:12.74745432 +0000 UTC m=+1372.763536723" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.873288 5029 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.949045 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-logs\") pod \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.949126 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-combined-ca-bundle\") pod \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.949210 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-scripts\") pod \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.949318 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-tls-certs\") pod \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.949389 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-config-data\") pod \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.949424 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7s2l\" 
(UniqueName: \"kubernetes.io/projected/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-kube-api-access-l7s2l\") pod \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.949496 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-secret-key\") pod \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\" (UID: \"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9\") " Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.951054 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-logs" (OuterVolumeSpecName: "logs") pod "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" (UID: "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.955335 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-kube-api-access-l7s2l" (OuterVolumeSpecName: "kube-api-access-l7s2l") pod "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" (UID: "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9"). InnerVolumeSpecName "kube-api-access-l7s2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.956660 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" (UID: "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.981847 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-config-data" (OuterVolumeSpecName: "config-data") pod "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" (UID: "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:50:12 crc kubenswrapper[5029]: I0313 20:50:12.998976 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" (UID: "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:12.999989 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-scripts" (OuterVolumeSpecName: "scripts") pod "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" (UID: "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.023238 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" (UID: "9208e2d5-599e-46f6-b6df-4b4f09fbc5c9"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.052842 5029 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.052900 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.052910 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7s2l\" (UniqueName: \"kubernetes.io/projected/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-kube-api-access-l7s2l\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.052922 5029 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.052931 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.052939 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.052947 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.073869 5029 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.130145 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.720912 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.721473 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-log" containerID="cri-o://ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4" gracePeriod=30 Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.721555 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-httpd" containerID="cri-o://8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584" gracePeriod=30 Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.762807 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerStarted","Data":"2b49fa28a417016614a016c15290930adc5bccfc387028874202f35251955251"} Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.766575 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6c6bfdcb-59kpl" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.771098 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6c6bfdcb-59kpl" event={"ID":"9208e2d5-599e-46f6-b6df-4b4f09fbc5c9","Type":"ContainerDied","Data":"f106ebf69c7299c21c5c7753bbc2818c14a5316ad6d639ccf592a076187cb946"} Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.771635 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerName="manila-share" containerID="cri-o://8979b2347ffdf7edc624639bd2f223ea6a2978f06c16c94d7c5946ea3f413825" gracePeriod=30 Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.773417 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerName="probe" containerID="cri-o://d0471cb6810af61b9451da466f0de31ffb895c5b9ad535765ba9993d46c02563" gracePeriod=30 Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.777034 5029 scope.go:117] "RemoveContainer" containerID="f6af4dac6417db6513b5e2602d7469ad832100f41291a81205b348b878058d2d" Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.817453 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6c6bfdcb-59kpl"] Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.831048 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f6c6bfdcb-59kpl"] Mar 13 20:50:13 crc kubenswrapper[5029]: I0313 20:50:13.992294 5029 scope.go:117] "RemoveContainer" containerID="a6ab2709590ed237e109db82ee33f472cd645e5b45627f0cfb90ef7afafbc2dc" Mar 13 20:50:14 crc kubenswrapper[5029]: I0313 20:50:14.621060 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" 
path="/var/lib/kubelet/pods/9208e2d5-599e-46f6-b6df-4b4f09fbc5c9/volumes" Mar 13 20:50:14 crc kubenswrapper[5029]: I0313 20:50:14.622128 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea489c47-d9a5-433d-ae81-17d2a22b8b45" path="/var/lib/kubelet/pods/ea489c47-d9a5-433d-ae81-17d2a22b8b45/volumes" Mar 13 20:50:14 crc kubenswrapper[5029]: I0313 20:50:14.778812 5029 generic.go:334] "Generic (PLEG): container finished" podID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerID="d0471cb6810af61b9451da466f0de31ffb895c5b9ad535765ba9993d46c02563" exitCode=0 Mar 13 20:50:14 crc kubenswrapper[5029]: I0313 20:50:14.778885 5029 generic.go:334] "Generic (PLEG): container finished" podID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerID="8979b2347ffdf7edc624639bd2f223ea6a2978f06c16c94d7c5946ea3f413825" exitCode=1 Mar 13 20:50:14 crc kubenswrapper[5029]: I0313 20:50:14.779006 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9813a0d8-78d8-41ea-a5af-b57454a8e0a0","Type":"ContainerDied","Data":"d0471cb6810af61b9451da466f0de31ffb895c5b9ad535765ba9993d46c02563"} Mar 13 20:50:14 crc kubenswrapper[5029]: I0313 20:50:14.779045 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9813a0d8-78d8-41ea-a5af-b57454a8e0a0","Type":"ContainerDied","Data":"8979b2347ffdf7edc624639bd2f223ea6a2978f06c16c94d7c5946ea3f413825"} Mar 13 20:50:14 crc kubenswrapper[5029]: I0313 20:50:14.783477 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa59f852-51b9-4576-9935-401acd4199bf" containerID="ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4" exitCode=143 Mar 13 20:50:14 crc kubenswrapper[5029]: I0313 20:50:14.783535 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"fa59f852-51b9-4576-9935-401acd4199bf","Type":"ContainerDied","Data":"ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4"} Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.163140 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.231539 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-var-lib-manila\") pod \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.231630 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-combined-ca-bundle\") pod \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.231680 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "9813a0d8-78d8-41ea-a5af-b57454a8e0a0" (UID: "9813a0d8-78d8-41ea-a5af-b57454a8e0a0"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.231812 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data\") pod \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.231895 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-ceph\") pod \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.231950 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data-custom\") pod \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.231975 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-scripts\") pod \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.232051 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxpt7\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-kube-api-access-bxpt7\") pod \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.232108 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-etc-machine-id\") pod \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\" (UID: \"9813a0d8-78d8-41ea-a5af-b57454a8e0a0\") " Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.232457 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9813a0d8-78d8-41ea-a5af-b57454a8e0a0" (UID: "9813a0d8-78d8-41ea-a5af-b57454a8e0a0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.233136 5029 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.233160 5029 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-var-lib-manila\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.248010 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9813a0d8-78d8-41ea-a5af-b57454a8e0a0" (UID: "9813a0d8-78d8-41ea-a5af-b57454a8e0a0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.255635 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-ceph" (OuterVolumeSpecName: "ceph") pod "9813a0d8-78d8-41ea-a5af-b57454a8e0a0" (UID: "9813a0d8-78d8-41ea-a5af-b57454a8e0a0"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.256131 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-scripts" (OuterVolumeSpecName: "scripts") pod "9813a0d8-78d8-41ea-a5af-b57454a8e0a0" (UID: "9813a0d8-78d8-41ea-a5af-b57454a8e0a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.257388 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-kube-api-access-bxpt7" (OuterVolumeSpecName: "kube-api-access-bxpt7") pod "9813a0d8-78d8-41ea-a5af-b57454a8e0a0" (UID: "9813a0d8-78d8-41ea-a5af-b57454a8e0a0"). InnerVolumeSpecName "kube-api-access-bxpt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.301836 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9813a0d8-78d8-41ea-a5af-b57454a8e0a0" (UID: "9813a0d8-78d8-41ea-a5af-b57454a8e0a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.335692 5029 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-ceph\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.335754 5029 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.335916 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.335932 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxpt7\" (UniqueName: \"kubernetes.io/projected/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-kube-api-access-bxpt7\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.335942 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.381365 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data" (OuterVolumeSpecName: "config-data") pod "9813a0d8-78d8-41ea-a5af-b57454a8e0a0" (UID: "9813a0d8-78d8-41ea-a5af-b57454a8e0a0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.438435 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9813a0d8-78d8-41ea-a5af-b57454a8e0a0-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.802878 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerStarted","Data":"e3f7bee8f3e1092f47fcc97b99622974dff7ebc0254a485547e1a52b58f010eb"} Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.803124 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="ceilometer-central-agent" containerID="cri-o://765d9464bd38ad3c79a731bc9be269b8e5710fd37d2167899b0be9be1244a0f8" gracePeriod=30 Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.804659 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.806465 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="proxy-httpd" containerID="cri-o://e3f7bee8f3e1092f47fcc97b99622974dff7ebc0254a485547e1a52b58f010eb" gracePeriod=30 Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.806640 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="sg-core" containerID="cri-o://2b49fa28a417016614a016c15290930adc5bccfc387028874202f35251955251" gracePeriod=30 Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.806706 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="ceilometer-notification-agent" containerID="cri-o://fd2bb92a640e831db5b69a6b44e3fb1097befb1d7e11180e2a6cb099bcb4e1c3" gracePeriod=30 Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.821787 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9813a0d8-78d8-41ea-a5af-b57454a8e0a0","Type":"ContainerDied","Data":"83f6d3066522b2084ff367f04d2129ce71a1d2b162f74b318bcf603b0dbe752c"} Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.821846 5029 scope.go:117] "RemoveContainer" containerID="d0471cb6810af61b9451da466f0de31ffb895c5b9ad535765ba9993d46c02563" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.822055 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.835578 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.257024968 podStartE2EDuration="7.83555407s" podCreationTimestamp="2026-03-13 20:50:08 +0000 UTC" firstStartedPulling="2026-03-13 20:50:09.246548257 +0000 UTC m=+1369.262630660" lastFinishedPulling="2026-03-13 20:50:14.825077359 +0000 UTC m=+1374.841159762" observedRunningTime="2026-03-13 20:50:15.831873389 +0000 UTC m=+1375.847955812" watchObservedRunningTime="2026-03-13 20:50:15.83555407 +0000 UTC m=+1375.851636473" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.876643 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.904764 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919112 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Mar 13 20:50:15 crc kubenswrapper[5029]: E0313 20:50:15.919516 
5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919533 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon" Mar 13 20:50:15 crc kubenswrapper[5029]: E0313 20:50:15.919547 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerName="probe" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919553 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerName="probe" Mar 13 20:50:15 crc kubenswrapper[5029]: E0313 20:50:15.919568 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerName="manila-share" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919574 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerName="manila-share" Mar 13 20:50:15 crc kubenswrapper[5029]: E0313 20:50:15.919585 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274b7405-641b-4d9c-90b6-7bc8d511d5ea" containerName="oc" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919591 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="274b7405-641b-4d9c-90b6-7bc8d511d5ea" containerName="oc" Mar 13 20:50:15 crc kubenswrapper[5029]: E0313 20:50:15.919605 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon-log" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919611 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon-log" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919806 5029 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="274b7405-641b-4d9c-90b6-7bc8d511d5ea" containerName="oc" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919823 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919832 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerName="probe" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919843 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9208e2d5-599e-46f6-b6df-4b4f09fbc5c9" containerName="horizon-log" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.919872 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" containerName="manila-share" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.920999 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.925947 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.928169 5029 scope.go:117] "RemoveContainer" containerID="8979b2347ffdf7edc624639bd2f223ea6a2978f06c16c94d7c5946ea3f413825" Mar 13 20:50:15 crc kubenswrapper[5029]: I0313 20:50:15.934618 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.060151 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 
20:50:16.060233 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjhh7\" (UniqueName: \"kubernetes.io/projected/1854a458-f657-4ddf-a316-e313a3403137-kube-api-access-fjhh7\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.060271 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1854a458-f657-4ddf-a316-e313a3403137-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.060292 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.060346 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1854a458-f657-4ddf-a316-e313a3403137-ceph\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.060363 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-config-data\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.060407 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1854a458-f657-4ddf-a316-e313a3403137-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.060441 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-scripts\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.076846 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.077180 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="53004b20-47d0-461d-b054-fb52f7a78770" containerName="glance-log" containerID="cri-o://c4af649cbd1fa3db80ce661da8c649767e18b13a2512ab095d54236f9b767c44" gracePeriod=30 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.077815 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="53004b20-47d0-461d-b054-fb52f7a78770" containerName="glance-httpd" containerID="cri-o://eedd795f1a4b53a18ee3b96fdd49986c1c746ca181f0e511ce6c35494e5c6c25" gracePeriod=30 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.155531 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.156946 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85c9b98d8-kzhp5" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 
20:50:16.162056 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.162128 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjhh7\" (UniqueName: \"kubernetes.io/projected/1854a458-f657-4ddf-a316-e313a3403137-kube-api-access-fjhh7\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.162158 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1854a458-f657-4ddf-a316-e313a3403137-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.162179 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.162224 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1854a458-f657-4ddf-a316-e313a3403137-ceph\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.162240 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-config-data\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.162273 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1854a458-f657-4ddf-a316-e313a3403137-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.162300 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-scripts\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.162751 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1854a458-f657-4ddf-a316-e313a3403137-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.162817 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1854a458-f657-4ddf-a316-e313a3403137-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.168369 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-config-data\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " 
pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.169547 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1854a458-f657-4ddf-a316-e313a3403137-ceph\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.171174 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.171774 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-scripts\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.178515 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1854a458-f657-4ddf-a316-e313a3403137-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.190727 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjhh7\" (UniqueName: \"kubernetes.io/projected/1854a458-f657-4ddf-a316-e313a3403137-kube-api-access-fjhh7\") pod \"manila-share-share1-0\" (UID: \"1854a458-f657-4ddf-a316-e313a3403137\") " pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.271498 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.272116 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85bfd56bd4-bs6qf"] Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.272660 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85bfd56bd4-bs6qf" podUID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerName="placement-api" containerID="cri-o://2756430389c114d884682574fa01ce0a3d40540564fd8713da5614cfc51abb29" gracePeriod=30 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.272806 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85bfd56bd4-bs6qf" podUID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerName="placement-log" containerID="cri-o://fc64aef6dfbf66b739e8f304c9cfd646cfaa779da452629bad2eb084763b2b31" gracePeriod=30 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.275346 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.294607 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-94bcffbb7-lqxc5" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.613350 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9813a0d8-78d8-41ea-a5af-b57454a8e0a0" path="/var/lib/kubelet/pods/9813a0d8-78d8-41ea-a5af-b57454a8e0a0/volumes" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.851159 5029 generic.go:334] "Generic (PLEG): container finished" podID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerID="fc64aef6dfbf66b739e8f304c9cfd646cfaa779da452629bad2eb084763b2b31" exitCode=143 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.851300 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bfd56bd4-bs6qf" event={"ID":"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb","Type":"ContainerDied","Data":"fc64aef6dfbf66b739e8f304c9cfd646cfaa779da452629bad2eb084763b2b31"} Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.858330 5029 generic.go:334] "Generic (PLEG): container finished" podID="53004b20-47d0-461d-b054-fb52f7a78770" containerID="c4af649cbd1fa3db80ce661da8c649767e18b13a2512ab095d54236f9b767c44" exitCode=143 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.858420 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53004b20-47d0-461d-b054-fb52f7a78770","Type":"ContainerDied","Data":"c4af649cbd1fa3db80ce661da8c649767e18b13a2512ab095d54236f9b767c44"} Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.881118 5029 generic.go:334] "Generic (PLEG): container finished" podID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerID="e3f7bee8f3e1092f47fcc97b99622974dff7ebc0254a485547e1a52b58f010eb" exitCode=0 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.881152 5029 generic.go:334] "Generic (PLEG): container 
finished" podID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerID="2b49fa28a417016614a016c15290930adc5bccfc387028874202f35251955251" exitCode=2 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.881162 5029 generic.go:334] "Generic (PLEG): container finished" podID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerID="fd2bb92a640e831db5b69a6b44e3fb1097befb1d7e11180e2a6cb099bcb4e1c3" exitCode=0 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.881169 5029 generic.go:334] "Generic (PLEG): container finished" podID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerID="765d9464bd38ad3c79a731bc9be269b8e5710fd37d2167899b0be9be1244a0f8" exitCode=0 Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.881704 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerDied","Data":"e3f7bee8f3e1092f47fcc97b99622974dff7ebc0254a485547e1a52b58f010eb"} Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.881752 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerDied","Data":"2b49fa28a417016614a016c15290930adc5bccfc387028874202f35251955251"} Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.881767 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerDied","Data":"fd2bb92a640e831db5b69a6b44e3fb1097befb1d7e11180e2a6cb099bcb4e1c3"} Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.881778 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerDied","Data":"765d9464bd38ad3c79a731bc9be269b8e5710fd37d2167899b0be9be1244a0f8"} Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.912593 5029 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/glance-default-external-api-0" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": read tcp 10.217.0.2:46482->10.217.0.157:9292: read: connection reset by peer" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.912611 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": read tcp 10.217.0.2:46490->10.217.0.157:9292: read: connection reset by peer" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.917094 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.980135 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-run-httpd\") pod \"995e8918-fc5c-4cfb-9306-6c4953b72c03\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.980569 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-log-httpd\") pod \"995e8918-fc5c-4cfb-9306-6c4953b72c03\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.980622 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrxws\" (UniqueName: \"kubernetes.io/projected/995e8918-fc5c-4cfb-9306-6c4953b72c03-kube-api-access-xrxws\") pod \"995e8918-fc5c-4cfb-9306-6c4953b72c03\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.980722 5029 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-sg-core-conf-yaml\") pod \"995e8918-fc5c-4cfb-9306-6c4953b72c03\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.980772 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-config-data\") pod \"995e8918-fc5c-4cfb-9306-6c4953b72c03\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.980843 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-combined-ca-bundle\") pod \"995e8918-fc5c-4cfb-9306-6c4953b72c03\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.980901 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-scripts\") pod \"995e8918-fc5c-4cfb-9306-6c4953b72c03\" (UID: \"995e8918-fc5c-4cfb-9306-6c4953b72c03\") " Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.980913 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "995e8918-fc5c-4cfb-9306-6c4953b72c03" (UID: "995e8918-fc5c-4cfb-9306-6c4953b72c03"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.986083 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "995e8918-fc5c-4cfb-9306-6c4953b72c03" (UID: "995e8918-fc5c-4cfb-9306-6c4953b72c03"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.987564 5029 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:16 crc kubenswrapper[5029]: I0313 20:50:16.987633 5029 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995e8918-fc5c-4cfb-9306-6c4953b72c03-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.033879 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.035449 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995e8918-fc5c-4cfb-9306-6c4953b72c03-kube-api-access-xrxws" (OuterVolumeSpecName: "kube-api-access-xrxws") pod "995e8918-fc5c-4cfb-9306-6c4953b72c03" (UID: "995e8918-fc5c-4cfb-9306-6c4953b72c03"). InnerVolumeSpecName "kube-api-access-xrxws". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.042348 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-scripts" (OuterVolumeSpecName: "scripts") pod "995e8918-fc5c-4cfb-9306-6c4953b72c03" (UID: "995e8918-fc5c-4cfb-9306-6c4953b72c03"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.072257 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "995e8918-fc5c-4cfb-9306-6c4953b72c03" (UID: "995e8918-fc5c-4cfb-9306-6c4953b72c03"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.101106 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.101151 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrxws\" (UniqueName: \"kubernetes.io/projected/995e8918-fc5c-4cfb-9306-6c4953b72c03-kube-api-access-xrxws\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.102236 5029 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.225884 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "995e8918-fc5c-4cfb-9306-6c4953b72c03" (UID: "995e8918-fc5c-4cfb-9306-6c4953b72c03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.239126 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-config-data" (OuterVolumeSpecName: "config-data") pod "995e8918-fc5c-4cfb-9306-6c4953b72c03" (UID: "995e8918-fc5c-4cfb-9306-6c4953b72c03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.306448 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.306496 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995e8918-fc5c-4cfb-9306-6c4953b72c03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.638966 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.713564 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-config-data\") pod \"fa59f852-51b9-4576-9935-401acd4199bf\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.713703 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-public-tls-certs\") pod \"fa59f852-51b9-4576-9935-401acd4199bf\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.713742 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"fa59f852-51b9-4576-9935-401acd4199bf\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.713801 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frj5x\" (UniqueName: \"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-kube-api-access-frj5x\") pod \"fa59f852-51b9-4576-9935-401acd4199bf\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.713908 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-ceph\") pod \"fa59f852-51b9-4576-9935-401acd4199bf\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.714014 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-logs\") pod \"fa59f852-51b9-4576-9935-401acd4199bf\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.714088 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-scripts\") pod \"fa59f852-51b9-4576-9935-401acd4199bf\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.714140 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-httpd-run\") pod \"fa59f852-51b9-4576-9935-401acd4199bf\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.714174 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-combined-ca-bundle\") pod \"fa59f852-51b9-4576-9935-401acd4199bf\" (UID: \"fa59f852-51b9-4576-9935-401acd4199bf\") " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.715696 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-logs" (OuterVolumeSpecName: "logs") pod "fa59f852-51b9-4576-9935-401acd4199bf" (UID: "fa59f852-51b9-4576-9935-401acd4199bf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.720997 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-kube-api-access-frj5x" (OuterVolumeSpecName: "kube-api-access-frj5x") pod "fa59f852-51b9-4576-9935-401acd4199bf" (UID: "fa59f852-51b9-4576-9935-401acd4199bf"). 
InnerVolumeSpecName "kube-api-access-frj5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.722557 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fa59f852-51b9-4576-9935-401acd4199bf" (UID: "fa59f852-51b9-4576-9935-401acd4199bf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.725553 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-ceph" (OuterVolumeSpecName: "ceph") pod "fa59f852-51b9-4576-9935-401acd4199bf" (UID: "fa59f852-51b9-4576-9935-401acd4199bf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.725804 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "fa59f852-51b9-4576-9935-401acd4199bf" (UID: "fa59f852-51b9-4576-9935-401acd4199bf"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.743672 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-scripts" (OuterVolumeSpecName: "scripts") pod "fa59f852-51b9-4576-9935-401acd4199bf" (UID: "fa59f852-51b9-4576-9935-401acd4199bf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.809066 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa59f852-51b9-4576-9935-401acd4199bf" (UID: "fa59f852-51b9-4576-9935-401acd4199bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.816783 5029 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.816837 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frj5x\" (UniqueName: \"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-kube-api-access-frj5x\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.816870 5029 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa59f852-51b9-4576-9935-401acd4199bf-ceph\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.816882 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.816893 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.816904 5029 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/fa59f852-51b9-4576-9935-401acd4199bf-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.816913 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.837625 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-config-data" (OuterVolumeSpecName: "config-data") pod "fa59f852-51b9-4576-9935-401acd4199bf" (UID: "fa59f852-51b9-4576-9935-401acd4199bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.848183 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fa59f852-51b9-4576-9935-401acd4199bf" (UID: "fa59f852-51b9-4576-9935-401acd4199bf"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.893659 5029 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.920055 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.920098 5029 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa59f852-51b9-4576-9935-401acd4199bf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.920111 5029 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.920288 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995e8918-fc5c-4cfb-9306-6c4953b72c03","Type":"ContainerDied","Data":"78ea453f73764299983e292db5610ee54fe279ece013345a632b18bb94438d0a"} Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.920352 5029 scope.go:117] "RemoveContainer" containerID="e3f7bee8f3e1092f47fcc97b99622974dff7ebc0254a485547e1a52b58f010eb" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.920726 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.927745 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa59f852-51b9-4576-9935-401acd4199bf" containerID="8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584" exitCode=0 Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.927964 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa59f852-51b9-4576-9935-401acd4199bf","Type":"ContainerDied","Data":"8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584"} Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.928035 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa59f852-51b9-4576-9935-401acd4199bf","Type":"ContainerDied","Data":"d3273d3c671238de4e406034553bc9f6128cfd306673207f95f191b0df7f0026"} Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.928186 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.948595 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1854a458-f657-4ddf-a316-e313a3403137","Type":"ContainerStarted","Data":"028f3bf74e078e932598eb83fe786c9ff69bcda99ba5c71639eb0dd2149ee982"} Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.948657 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1854a458-f657-4ddf-a316-e313a3403137","Type":"ContainerStarted","Data":"c7e0481dc0e95c3203d525661c84eaaf237d2ac37fec036d2141f5a2b27c91d1"} Mar 13 20:50:17 crc kubenswrapper[5029]: I0313 20:50:17.965048 5029 scope.go:117] "RemoveContainer" containerID="2b49fa28a417016614a016c15290930adc5bccfc387028874202f35251955251" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.007757 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.033588 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.063731 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.106207 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.111908 5029 scope.go:117] "RemoveContainer" containerID="fd2bb92a640e831db5b69a6b44e3fb1097befb1d7e11180e2a6cb099bcb4e1c3" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.131389 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:50:18 crc kubenswrapper[5029]: E0313 20:50:18.131820 5029 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="ceilometer-central-agent" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.131840 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="ceilometer-central-agent" Mar 13 20:50:18 crc kubenswrapper[5029]: E0313 20:50:18.131865 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-httpd" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.131871 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-httpd" Mar 13 20:50:18 crc kubenswrapper[5029]: E0313 20:50:18.131878 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-log" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.131885 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-log" Mar 13 20:50:18 crc kubenswrapper[5029]: E0313 20:50:18.131905 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="proxy-httpd" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.131911 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="proxy-httpd" Mar 13 20:50:18 crc kubenswrapper[5029]: E0313 20:50:18.131930 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="sg-core" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.131936 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="sg-core" Mar 13 20:50:18 crc kubenswrapper[5029]: E0313 20:50:18.131949 5029 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="ceilometer-notification-agent" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.131956 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="ceilometer-notification-agent" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.132135 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-log" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.132148 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="ceilometer-notification-agent" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.132159 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="sg-core" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.132178 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="ceilometer-central-agent" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.132200 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" containerName="proxy-httpd" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.132209 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa59f852-51b9-4576-9935-401acd4199bf" containerName="glance-httpd" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.133333 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.135647 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.136536 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.145836 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.147810 5029 scope.go:117] "RemoveContainer" containerID="765d9464bd38ad3c79a731bc9be269b8e5710fd37d2167899b0be9be1244a0f8" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.163305 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.166482 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.169988 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.174963 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.181997 5029 scope.go:117] "RemoveContainer" containerID="8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.201142 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.237573 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.237655 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-config-data\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.237875 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-config-data\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238104 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69b71985-d9f0-4b2c-85ea-b442ffb423c1-logs\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238140 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238159 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7fp\" (UniqueName: \"kubernetes.io/projected/7fd50413-3521-4a4b-9063-9a8728b0a7aa-kube-api-access-zk7fp\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238217 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238420 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-scripts\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238438 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-scripts\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238468 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238537 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-log-httpd\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238555 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238605 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/69b71985-d9f0-4b2c-85ea-b442ffb423c1-ceph\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238633 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-run-httpd\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238682 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt7gj\" (UniqueName: \"kubernetes.io/projected/69b71985-d9f0-4b2c-85ea-b442ffb423c1-kube-api-access-mt7gj\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.238727 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69b71985-d9f0-4b2c-85ea-b442ffb423c1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.242169 5029 scope.go:117] "RemoveContainer" containerID="ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.298482 5029 scope.go:117] "RemoveContainer" containerID="8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584" Mar 13 20:50:18 crc kubenswrapper[5029]: E0313 20:50:18.299007 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584\": container with ID starting with 8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584 not found: ID does not exist" containerID="8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.299051 5029 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584"} err="failed to get container status \"8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584\": rpc error: code = NotFound desc = could not find container \"8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584\": container with ID starting with 8e25676cee2477da76fa6b502da2abbbd99bd270b544c67b89a6706a9d735584 not found: ID does not exist" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.299087 5029 scope.go:117] "RemoveContainer" containerID="ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4" Mar 13 20:50:18 crc kubenswrapper[5029]: E0313 20:50:18.299608 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4\": container with ID starting with ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4 not found: ID does not exist" containerID="ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.299640 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4"} err="failed to get container status \"ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4\": rpc error: code = NotFound desc = could not find container \"ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4\": container with ID starting with ac7b32882b7de4ae8b93da7bea97f8dbba29c6958343c3f7d8cf36564003dec4 not found: ID does not exist" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340597 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340644 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-config-data\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340680 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-config-data\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340733 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69b71985-d9f0-4b2c-85ea-b442ffb423c1-logs\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340756 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340774 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk7fp\" (UniqueName: \"kubernetes.io/projected/7fd50413-3521-4a4b-9063-9a8728b0a7aa-kube-api-access-zk7fp\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 
20:50:18.340800 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340872 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-scripts\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340889 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-scripts\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340910 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340937 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-log-httpd\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340951 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340977 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/69b71985-d9f0-4b2c-85ea-b442ffb423c1-ceph\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.340995 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-run-httpd\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.341018 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt7gj\" (UniqueName: \"kubernetes.io/projected/69b71985-d9f0-4b2c-85ea-b442ffb423c1-kube-api-access-mt7gj\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.341039 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69b71985-d9f0-4b2c-85ea-b442ffb423c1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.341598 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69b71985-d9f0-4b2c-85ea-b442ffb423c1-httpd-run\") pod \"glance-default-external-api-0\" 
(UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.344358 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69b71985-d9f0-4b2c-85ea-b442ffb423c1-logs\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.344369 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-run-httpd\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.345181 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-log-httpd\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.345529 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.347311 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.350389 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/69b71985-d9f0-4b2c-85ea-b442ffb423c1-ceph\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.351460 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-config-data\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.356384 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-scripts\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.378641 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-scripts\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.378717 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt7gj\" (UniqueName: \"kubernetes.io/projected/69b71985-d9f0-4b2c-85ea-b442ffb423c1-kube-api-access-mt7gj\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.383655 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.388191 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.389606 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b71985-d9f0-4b2c-85ea-b442ffb423c1-config-data\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.395931 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk7fp\" (UniqueName: \"kubernetes.io/projected/7fd50413-3521-4a4b-9063-9a8728b0a7aa-kube-api-access-zk7fp\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.403759 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.424404 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"69b71985-d9f0-4b2c-85ea-b442ffb423c1\") " pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.456057 5029 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.498391 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.622689 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="995e8918-fc5c-4cfb-9306-6c4953b72c03" path="/var/lib/kubelet/pods/995e8918-fc5c-4cfb-9306-6c4953b72c03/volumes" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.624058 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa59f852-51b9-4576-9935-401acd4199bf" path="/var/lib/kubelet/pods/fa59f852-51b9-4576-9935-401acd4199bf/volumes" Mar 13 20:50:18 crc kubenswrapper[5029]: I0313 20:50:18.902243 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Mar 13 20:50:19 crc kubenswrapper[5029]: I0313 20:50:19.037018 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:19 crc kubenswrapper[5029]: W0313 20:50:19.050100 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd50413_3521_4a4b_9063_9a8728b0a7aa.slice/crio-66961f0baff97608dd14cfb4248c53ef051eeb29db7d8e29a0422cc5fb924099 WatchSource:0}: Error finding container 66961f0baff97608dd14cfb4248c53ef051eeb29db7d8e29a0422cc5fb924099: Status 404 returned error can't find the container with id 66961f0baff97608dd14cfb4248c53ef051eeb29db7d8e29a0422cc5fb924099 Mar 13 20:50:19 crc kubenswrapper[5029]: I0313 20:50:19.349263 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 13 20:50:20 crc kubenswrapper[5029]: I0313 20:50:19.999653 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerStarted","Data":"66961f0baff97608dd14cfb4248c53ef051eeb29db7d8e29a0422cc5fb924099"} Mar 13 20:50:20 crc kubenswrapper[5029]: I0313 20:50:20.011617 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69b71985-d9f0-4b2c-85ea-b442ffb423c1","Type":"ContainerStarted","Data":"0e0308e9185e7a51525df55d7c33f9d205c20972165bf81f8b39e2d6e78148bf"} Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.123062 5029 generic.go:334] "Generic (PLEG): container finished" podID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerID="2756430389c114d884682574fa01ce0a3d40540564fd8713da5614cfc51abb29" exitCode=0 Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.123713 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bfd56bd4-bs6qf" event={"ID":"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb","Type":"ContainerDied","Data":"2756430389c114d884682574fa01ce0a3d40540564fd8713da5614cfc51abb29"} Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.123742 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bfd56bd4-bs6qf" event={"ID":"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb","Type":"ContainerDied","Data":"eac05de69cf3415ff6ff831c5c2810d904da1ce1954426aaa4fa963d923d03ce"} Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.123756 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eac05de69cf3415ff6ff831c5c2810d904da1ce1954426aaa4fa963d923d03ce" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.130990 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1854a458-f657-4ddf-a316-e313a3403137","Type":"ContainerStarted","Data":"5ecb445e549f970e9fc41fa53388cc76eb0be84f7d1f38253ab4e935a2eb9528"} Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.133563 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.136372 5029 generic.go:334] "Generic (PLEG): container finished" podID="53004b20-47d0-461d-b054-fb52f7a78770" containerID="eedd795f1a4b53a18ee3b96fdd49986c1c746ca181f0e511ce6c35494e5c6c25" exitCode=0 Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.136835 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53004b20-47d0-461d-b054-fb52f7a78770","Type":"ContainerDied","Data":"eedd795f1a4b53a18ee3b96fdd49986c1c746ca181f0e511ce6c35494e5c6c25"} Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.140435 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69b71985-d9f0-4b2c-85ea-b442ffb423c1","Type":"ContainerStarted","Data":"a3de2bd62cc9ed3df985a88bbef85cb094abfbde62c235537f4016bb21f1f465"} Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.168819 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=6.168792719 podStartE2EDuration="6.168792719s" podCreationTimestamp="2026-03-13 20:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:50:21.160129053 +0000 UTC m=+1381.176211466" watchObservedRunningTime="2026-03-13 20:50:21.168792719 +0000 UTC m=+1381.184875122" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.201420 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.372567 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-logs\") pod \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.372678 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-scripts\") pod \"53004b20-47d0-461d-b054-fb52f7a78770\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.372717 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-httpd-run\") pod \"53004b20-47d0-461d-b054-fb52f7a78770\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373218 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfdxz\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-kube-api-access-sfdxz\") pod \"53004b20-47d0-461d-b054-fb52f7a78770\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373292 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-public-tls-certs\") pod \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373321 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgb8g\" (UniqueName: 
\"kubernetes.io/projected/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-kube-api-access-tgb8g\") pod \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373364 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-internal-tls-certs\") pod \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373357 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-logs" (OuterVolumeSpecName: "logs") pod "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" (UID: "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373395 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-combined-ca-bundle\") pod \"53004b20-47d0-461d-b054-fb52f7a78770\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373433 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-config-data\") pod \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373472 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-config-data\") pod \"53004b20-47d0-461d-b054-fb52f7a78770\" (UID: 
\"53004b20-47d0-461d-b054-fb52f7a78770\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373508 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-ceph\") pod \"53004b20-47d0-461d-b054-fb52f7a78770\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373529 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-logs\") pod \"53004b20-47d0-461d-b054-fb52f7a78770\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373582 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-scripts\") pod \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373615 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-internal-tls-certs\") pod \"53004b20-47d0-461d-b054-fb52f7a78770\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373640 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-combined-ca-bundle\") pod \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\" (UID: \"71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.373691 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"53004b20-47d0-461d-b054-fb52f7a78770\" (UID: \"53004b20-47d0-461d-b054-fb52f7a78770\") " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.375479 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.381949 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "53004b20-47d0-461d-b054-fb52f7a78770" (UID: "53004b20-47d0-461d-b054-fb52f7a78770"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.382044 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-logs" (OuterVolumeSpecName: "logs") pod "53004b20-47d0-461d-b054-fb52f7a78770" (UID: "53004b20-47d0-461d-b054-fb52f7a78770"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.383165 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "53004b20-47d0-461d-b054-fb52f7a78770" (UID: "53004b20-47d0-461d-b054-fb52f7a78770"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.397567 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-kube-api-access-tgb8g" (OuterVolumeSpecName: "kube-api-access-tgb8g") pod "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" (UID: "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb"). InnerVolumeSpecName "kube-api-access-tgb8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.403681 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-scripts" (OuterVolumeSpecName: "scripts") pod "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" (UID: "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.403773 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-ceph" (OuterVolumeSpecName: "ceph") pod "53004b20-47d0-461d-b054-fb52f7a78770" (UID: "53004b20-47d0-461d-b054-fb52f7a78770"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.403960 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-kube-api-access-sfdxz" (OuterVolumeSpecName: "kube-api-access-sfdxz") pod "53004b20-47d0-461d-b054-fb52f7a78770" (UID: "53004b20-47d0-461d-b054-fb52f7a78770"). InnerVolumeSpecName "kube-api-access-sfdxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.411028 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-scripts" (OuterVolumeSpecName: "scripts") pod "53004b20-47d0-461d-b054-fb52f7a78770" (UID: "53004b20-47d0-461d-b054-fb52f7a78770"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.447129 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53004b20-47d0-461d-b054-fb52f7a78770" (UID: "53004b20-47d0-461d-b054-fb52f7a78770"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.479772 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.479809 5029 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-ceph\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.479818 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.479826 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.479865 5029 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.479874 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-scripts\") on node 
\"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.479882 5029 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53004b20-47d0-461d-b054-fb52f7a78770-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.479890 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfdxz\" (UniqueName: \"kubernetes.io/projected/53004b20-47d0-461d-b054-fb52f7a78770-kube-api-access-sfdxz\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.479900 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgb8g\" (UniqueName: \"kubernetes.io/projected/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-kube-api-access-tgb8g\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.596013 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-config-data" (OuterVolumeSpecName: "config-data") pod "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" (UID: "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.597001 5029 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.633523 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "53004b20-47d0-461d-b054-fb52f7a78770" (UID: "53004b20-47d0-461d-b054-fb52f7a78770"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.647060 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-config-data" (OuterVolumeSpecName: "config-data") pod "53004b20-47d0-461d-b054-fb52f7a78770" (UID: "53004b20-47d0-461d-b054-fb52f7a78770"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.662096 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.663107 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" (UID: "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.686526 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.686559 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.686570 5029 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53004b20-47d0-461d-b054-fb52f7a78770-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.686580 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.686589 5029 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.710967 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" (UID: "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.719987 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" (UID: "71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.788351 5029 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:21 crc kubenswrapper[5029]: I0313 20:50:21.788395 5029 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.151640 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53004b20-47d0-461d-b054-fb52f7a78770","Type":"ContainerDied","Data":"4be6bdb4e84dc9cdd1fca69ad7f61e7582251a07149c1b5da82de5bc929830a6"} Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.151962 5029 scope.go:117] "RemoveContainer" containerID="eedd795f1a4b53a18ee3b96fdd49986c1c746ca181f0e511ce6c35494e5c6c25" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.153128 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.158499 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerStarted","Data":"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d"} Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.169372 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85bfd56bd4-bs6qf" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.169993 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69b71985-d9f0-4b2c-85ea-b442ffb423c1","Type":"ContainerStarted","Data":"be11ca641b16bdc4888db8db382968f9d190f20704108d3bb0528a268f38bf87"} Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.184049 5029 scope.go:117] "RemoveContainer" containerID="c4af649cbd1fa3db80ce661da8c649767e18b13a2512ab095d54236f9b767c44" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.238785 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.238755284 podStartE2EDuration="5.238755284s" podCreationTimestamp="2026-03-13 20:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:50:22.220307791 +0000 UTC m=+1382.236390204" watchObservedRunningTime="2026-03-13 20:50:22.238755284 +0000 UTC m=+1382.254837687" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.256968 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.267579 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:50:22 crc kubenswrapper[5029]: 
I0313 20:50:22.282005 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85bfd56bd4-bs6qf"] Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.296000 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85bfd56bd4-bs6qf"] Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.303164 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:50:22 crc kubenswrapper[5029]: E0313 20:50:22.303547 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53004b20-47d0-461d-b054-fb52f7a78770" containerName="glance-httpd" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.303561 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="53004b20-47d0-461d-b054-fb52f7a78770" containerName="glance-httpd" Mar 13 20:50:22 crc kubenswrapper[5029]: E0313 20:50:22.303576 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerName="placement-api" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.303584 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerName="placement-api" Mar 13 20:50:22 crc kubenswrapper[5029]: E0313 20:50:22.303595 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerName="placement-log" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.303601 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerName="placement-log" Mar 13 20:50:22 crc kubenswrapper[5029]: E0313 20:50:22.303625 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53004b20-47d0-461d-b054-fb52f7a78770" containerName="glance-log" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.303632 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="53004b20-47d0-461d-b054-fb52f7a78770" 
containerName="glance-log" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.303889 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="53004b20-47d0-461d-b054-fb52f7a78770" containerName="glance-log" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.303915 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="53004b20-47d0-461d-b054-fb52f7a78770" containerName="glance-httpd" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.303935 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerName="placement-log" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.303945 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" containerName="placement-api" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.304964 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.328494 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.345135 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.347840 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.398985 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bdfe146-20b8-4a56-8a77-61affcc4e25f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.399051 
5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.399091 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.399115 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.399179 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bdfe146-20b8-4a56-8a77-61affcc4e25f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.399236 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.399265 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28ljv\" (UniqueName: \"kubernetes.io/projected/6bdfe146-20b8-4a56-8a77-61affcc4e25f-kube-api-access-28ljv\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.399311 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.399390 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6bdfe146-20b8-4a56-8a77-61affcc4e25f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502257 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bdfe146-20b8-4a56-8a77-61affcc4e25f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502325 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502364 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502397 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502465 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bdfe146-20b8-4a56-8a77-61affcc4e25f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502524 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502550 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28ljv\" (UniqueName: \"kubernetes.io/projected/6bdfe146-20b8-4a56-8a77-61affcc4e25f-kube-api-access-28ljv\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502595 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502666 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6bdfe146-20b8-4a56-8a77-61affcc4e25f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.502878 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bdfe146-20b8-4a56-8a77-61affcc4e25f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.503630 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.503721 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bdfe146-20b8-4a56-8a77-61affcc4e25f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.512657 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.518393 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6bdfe146-20b8-4a56-8a77-61affcc4e25f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.533149 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.535032 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.537253 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28ljv\" (UniqueName: \"kubernetes.io/projected/6bdfe146-20b8-4a56-8a77-61affcc4e25f-kube-api-access-28ljv\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.565915 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdfe146-20b8-4a56-8a77-61affcc4e25f-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.590733 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bdfe146-20b8-4a56-8a77-61affcc4e25f\") " pod="openstack/glance-default-internal-api-0" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.626502 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53004b20-47d0-461d-b054-fb52f7a78770" path="/var/lib/kubelet/pods/53004b20-47d0-461d-b054-fb52f7a78770/volumes" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.629292 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb" path="/var/lib/kubelet/pods/71c9fd3c-ba44-46bd-ae7d-0bc3edc9b6eb/volumes" Mar 13 20:50:22 crc kubenswrapper[5029]: I0313 20:50:22.718435 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:23 crc kubenswrapper[5029]: I0313 20:50:23.193909 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerStarted","Data":"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a"} Mar 13 20:50:23 crc kubenswrapper[5029]: I0313 20:50:23.391629 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 13 20:50:23 crc kubenswrapper[5029]: I0313 20:50:23.582445 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 13 20:50:24 crc kubenswrapper[5029]: I0313 20:50:24.222691 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bdfe146-20b8-4a56-8a77-61affcc4e25f","Type":"ContainerStarted","Data":"4c26bd14e0eed84a025424e0c2b9961d5d022ed29cee43c4f4d95aa53ddfc151"} Mar 13 20:50:24 crc kubenswrapper[5029]: I0313 20:50:24.223636 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bdfe146-20b8-4a56-8a77-61affcc4e25f","Type":"ContainerStarted","Data":"78b616d327d5829a2662189f4605356e7d4a05fead71577b962a5baaadc5c53f"} Mar 13 20:50:24 crc kubenswrapper[5029]: I0313 20:50:24.234311 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerStarted","Data":"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd"} Mar 13 20:50:25 crc kubenswrapper[5029]: I0313 20:50:25.248805 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bdfe146-20b8-4a56-8a77-61affcc4e25f","Type":"ContainerStarted","Data":"a20a6ea91e0d0b0e0b01eec6a1de567221eea858fb2e7397cb948d349819ede0"} Mar 13 20:50:25 crc kubenswrapper[5029]: I0313 
20:50:25.273364 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.273344724 podStartE2EDuration="3.273344724s" podCreationTimestamp="2026-03-13 20:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:50:25.269261052 +0000 UTC m=+1385.285343465" watchObservedRunningTime="2026-03-13 20:50:25.273344724 +0000 UTC m=+1385.289427117" Mar 13 20:50:26 crc kubenswrapper[5029]: I0313 20:50:26.261880 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerStarted","Data":"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a"} Mar 13 20:50:26 crc kubenswrapper[5029]: I0313 20:50:26.262208 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="ceilometer-central-agent" containerID="cri-o://11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d" gracePeriod=30 Mar 13 20:50:26 crc kubenswrapper[5029]: I0313 20:50:26.262261 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="sg-core" containerID="cri-o://acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd" gracePeriod=30 Mar 13 20:50:26 crc kubenswrapper[5029]: I0313 20:50:26.262262 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="ceilometer-notification-agent" containerID="cri-o://308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a" gracePeriod=30 Mar 13 20:50:26 crc kubenswrapper[5029]: I0313 20:50:26.262273 5029 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="proxy-httpd" containerID="cri-o://2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a" gracePeriod=30 Mar 13 20:50:26 crc kubenswrapper[5029]: I0313 20:50:26.276138 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Mar 13 20:50:26 crc kubenswrapper[5029]: I0313 20:50:26.297295 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.769504089 podStartE2EDuration="9.297270302s" podCreationTimestamp="2026-03-13 20:50:17 +0000 UTC" firstStartedPulling="2026-03-13 20:50:19.05303273 +0000 UTC m=+1379.069115133" lastFinishedPulling="2026-03-13 20:50:25.580798943 +0000 UTC m=+1385.596881346" observedRunningTime="2026-03-13 20:50:26.289239193 +0000 UTC m=+1386.305321596" watchObservedRunningTime="2026-03-13 20:50:26.297270302 +0000 UTC m=+1386.313352695" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.110452 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.232423 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk7fp\" (UniqueName: \"kubernetes.io/projected/7fd50413-3521-4a4b-9063-9a8728b0a7aa-kube-api-access-zk7fp\") pod \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.232553 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-log-httpd\") pod \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.232622 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-run-httpd\") pod \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.232670 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-config-data\") pod \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.232691 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-scripts\") pod \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.232817 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-sg-core-conf-yaml\") pod \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.232845 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-combined-ca-bundle\") pod \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\" (UID: \"7fd50413-3521-4a4b-9063-9a8728b0a7aa\") " Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.233271 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7fd50413-3521-4a4b-9063-9a8728b0a7aa" (UID: "7fd50413-3521-4a4b-9063-9a8728b0a7aa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.233379 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7fd50413-3521-4a4b-9063-9a8728b0a7aa" (UID: "7fd50413-3521-4a4b-9063-9a8728b0a7aa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.233427 5029 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.250201 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd50413-3521-4a4b-9063-9a8728b0a7aa-kube-api-access-zk7fp" (OuterVolumeSpecName: "kube-api-access-zk7fp") pod "7fd50413-3521-4a4b-9063-9a8728b0a7aa" (UID: "7fd50413-3521-4a4b-9063-9a8728b0a7aa"). InnerVolumeSpecName "kube-api-access-zk7fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.251053 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-scripts" (OuterVolumeSpecName: "scripts") pod "7fd50413-3521-4a4b-9063-9a8728b0a7aa" (UID: "7fd50413-3521-4a4b-9063-9a8728b0a7aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.268365 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7fd50413-3521-4a4b-9063-9a8728b0a7aa" (UID: "7fd50413-3521-4a4b-9063-9a8728b0a7aa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.276809 5029 generic.go:334] "Generic (PLEG): container finished" podID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerID="2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a" exitCode=0 Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.276879 5029 generic.go:334] "Generic (PLEG): container finished" podID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerID="acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd" exitCode=2 Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.276891 5029 generic.go:334] "Generic (PLEG): container finished" podID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerID="308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a" exitCode=0 Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.276898 5029 generic.go:334] "Generic (PLEG): container finished" podID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerID="11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d" exitCode=0 Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.276921 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerDied","Data":"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a"} Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.276959 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerDied","Data":"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd"} Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.276970 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerDied","Data":"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a"} Mar 13 20:50:27 crc 
kubenswrapper[5029]: I0313 20:50:27.276981 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerDied","Data":"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d"} Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.276990 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fd50413-3521-4a4b-9063-9a8728b0a7aa","Type":"ContainerDied","Data":"66961f0baff97608dd14cfb4248c53ef051eeb29db7d8e29a0422cc5fb924099"} Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.277007 5029 scope.go:117] "RemoveContainer" containerID="2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.277147 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.316157 5029 scope.go:117] "RemoveContainer" containerID="acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.336033 5029 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.336095 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk7fp\" (UniqueName: \"kubernetes.io/projected/7fd50413-3521-4a4b-9063-9a8728b0a7aa-kube-api-access-zk7fp\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.336113 5029 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fd50413-3521-4a4b-9063-9a8728b0a7aa-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.336125 5029 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.347404 5029 scope.go:117] "RemoveContainer" containerID="308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.351632 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-config-data" (OuterVolumeSpecName: "config-data") pod "7fd50413-3521-4a4b-9063-9a8728b0a7aa" (UID: "7fd50413-3521-4a4b-9063-9a8728b0a7aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.369051 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fd50413-3521-4a4b-9063-9a8728b0a7aa" (UID: "7fd50413-3521-4a4b-9063-9a8728b0a7aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.376713 5029 scope.go:117] "RemoveContainer" containerID="11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.402925 5029 scope.go:117] "RemoveContainer" containerID="2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a" Mar 13 20:50:27 crc kubenswrapper[5029]: E0313 20:50:27.403494 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a\": container with ID starting with 2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a not found: ID does not exist" containerID="2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.403542 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a"} err="failed to get container status \"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a\": rpc error: code = NotFound desc = could not find container \"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a\": container with ID starting with 2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.403567 5029 scope.go:117] "RemoveContainer" containerID="acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd" Mar 13 20:50:27 crc kubenswrapper[5029]: E0313 20:50:27.403948 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd\": container with ID starting with 
acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd not found: ID does not exist" containerID="acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.403977 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd"} err="failed to get container status \"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd\": rpc error: code = NotFound desc = could not find container \"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd\": container with ID starting with acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.404014 5029 scope.go:117] "RemoveContainer" containerID="308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a" Mar 13 20:50:27 crc kubenswrapper[5029]: E0313 20:50:27.404293 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a\": container with ID starting with 308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a not found: ID does not exist" containerID="308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.404343 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a"} err="failed to get container status \"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a\": rpc error: code = NotFound desc = could not find container \"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a\": container with ID starting with 308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a not found: ID does not 
exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.404356 5029 scope.go:117] "RemoveContainer" containerID="11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d" Mar 13 20:50:27 crc kubenswrapper[5029]: E0313 20:50:27.404573 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d\": container with ID starting with 11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d not found: ID does not exist" containerID="11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.404593 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d"} err="failed to get container status \"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d\": rpc error: code = NotFound desc = could not find container \"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d\": container with ID starting with 11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.404623 5029 scope.go:117] "RemoveContainer" containerID="2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.404877 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a"} err="failed to get container status \"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a\": rpc error: code = NotFound desc = could not find container \"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a\": container with ID starting with 2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a not found: ID 
does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.404895 5029 scope.go:117] "RemoveContainer" containerID="acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.405123 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd"} err="failed to get container status \"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd\": rpc error: code = NotFound desc = could not find container \"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd\": container with ID starting with acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.405143 5029 scope.go:117] "RemoveContainer" containerID="308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.405373 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a"} err="failed to get container status \"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a\": rpc error: code = NotFound desc = could not find container \"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a\": container with ID starting with 308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.405391 5029 scope.go:117] "RemoveContainer" containerID="11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.405611 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d"} err="failed to get container 
status \"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d\": rpc error: code = NotFound desc = could not find container \"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d\": container with ID starting with 11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.405628 5029 scope.go:117] "RemoveContainer" containerID="2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.405874 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a"} err="failed to get container status \"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a\": rpc error: code = NotFound desc = could not find container \"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a\": container with ID starting with 2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.405891 5029 scope.go:117] "RemoveContainer" containerID="acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.406321 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd"} err="failed to get container status \"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd\": rpc error: code = NotFound desc = could not find container \"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd\": container with ID starting with acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.406341 5029 scope.go:117] "RemoveContainer" 
containerID="308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.406621 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a"} err="failed to get container status \"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a\": rpc error: code = NotFound desc = could not find container \"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a\": container with ID starting with 308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.406640 5029 scope.go:117] "RemoveContainer" containerID="11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.406914 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d"} err="failed to get container status \"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d\": rpc error: code = NotFound desc = could not find container \"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d\": container with ID starting with 11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.406933 5029 scope.go:117] "RemoveContainer" containerID="2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.407181 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a"} err="failed to get container status \"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a\": rpc error: code = NotFound desc = could 
not find container \"2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a\": container with ID starting with 2d4438a5e098ae314e5025f89bdaf4a0765b41bb21844a8e89de5a5f4136385a not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.407198 5029 scope.go:117] "RemoveContainer" containerID="acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.407443 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd"} err="failed to get container status \"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd\": rpc error: code = NotFound desc = could not find container \"acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd\": container with ID starting with acda2968e13015af63de994784f89d5e7dec8637ac0ae6d178ede3a851dd07cd not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.407463 5029 scope.go:117] "RemoveContainer" containerID="308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.407725 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a"} err="failed to get container status \"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a\": rpc error: code = NotFound desc = could not find container \"308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a\": container with ID starting with 308fb56b561b79025c7245339847f14e2b45d737d96b1aef07f23fa6b5b8e66a not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.407768 5029 scope.go:117] "RemoveContainer" containerID="11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 
20:50:27.408007 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d"} err="failed to get container status \"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d\": rpc error: code = NotFound desc = could not find container \"11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d\": container with ID starting with 11754e28776fb63a08d61980248b6958e02da464261af30daa0200bc51ee707d not found: ID does not exist" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.439623 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.439718 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd50413-3521-4a4b-9063-9a8728b0a7aa-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.613696 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.624153 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.636548 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:27 crc kubenswrapper[5029]: E0313 20:50:27.639111 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="proxy-httpd" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.639172 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="proxy-httpd" Mar 13 20:50:27 crc kubenswrapper[5029]: E0313 20:50:27.639191 5029 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="sg-core" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.639200 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="sg-core" Mar 13 20:50:27 crc kubenswrapper[5029]: E0313 20:50:27.639228 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="ceilometer-central-agent" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.639236 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="ceilometer-central-agent" Mar 13 20:50:27 crc kubenswrapper[5029]: E0313 20:50:27.639259 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="ceilometer-notification-agent" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.639268 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="ceilometer-notification-agent" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.639547 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="ceilometer-central-agent" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.639570 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="ceilometer-notification-agent" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.639580 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="sg-core" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.639593 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" containerName="proxy-httpd" Mar 13 20:50:27 crc kubenswrapper[5029]: 
I0313 20:50:27.641650 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.645596 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.645644 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.652606 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.744016 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-config-data\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.744069 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.744097 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-run-httpd\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.744224 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.744422 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvpz\" (UniqueName: \"kubernetes.io/projected/8302d1e1-054e-4db3-be98-987dbfa076c0-kube-api-access-4jvpz\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.744626 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-scripts\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.744693 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-log-httpd\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.847102 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jvpz\" (UniqueName: \"kubernetes.io/projected/8302d1e1-054e-4db3-be98-987dbfa076c0-kube-api-access-4jvpz\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.847243 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-scripts\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.847286 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-log-httpd\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.847335 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-config-data\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.847359 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.847381 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-run-httpd\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.847413 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.848972 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-log-httpd\") pod \"ceilometer-0\" (UID: 
\"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.849075 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-run-httpd\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.874156 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.874185 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.874196 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-scripts\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.874440 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-config-data\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.890843 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jvpz\" (UniqueName: 
\"kubernetes.io/projected/8302d1e1-054e-4db3-be98-987dbfa076c0-kube-api-access-4jvpz\") pod \"ceilometer-0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " pod="openstack/ceilometer-0" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.894771 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jpvh9"] Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.896377 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.911322 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jpvh9"] Mar 13 20:50:27 crc kubenswrapper[5029]: I0313 20:50:27.964354 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.057342 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2043c096-6123-44c7-90f8-b91a70523471-operator-scripts\") pod \"nova-api-db-create-jpvh9\" (UID: \"2043c096-6123-44c7-90f8-b91a70523471\") " pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.057415 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkkq4\" (UniqueName: \"kubernetes.io/projected/2043c096-6123-44c7-90f8-b91a70523471-kube-api-access-gkkq4\") pod \"nova-api-db-create-jpvh9\" (UID: \"2043c096-6123-44c7-90f8-b91a70523471\") " pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.096128 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1524-account-create-update-nlmgn"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.098051 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.101184 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.112999 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-g9qh5"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.118446 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.128600 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1524-account-create-update-nlmgn"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.140111 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-g9qh5"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.159210 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2043c096-6123-44c7-90f8-b91a70523471-operator-scripts\") pod \"nova-api-db-create-jpvh9\" (UID: \"2043c096-6123-44c7-90f8-b91a70523471\") " pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.159521 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkkq4\" (UniqueName: \"kubernetes.io/projected/2043c096-6123-44c7-90f8-b91a70523471-kube-api-access-gkkq4\") pod \"nova-api-db-create-jpvh9\" (UID: \"2043c096-6123-44c7-90f8-b91a70523471\") " pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.160389 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2043c096-6123-44c7-90f8-b91a70523471-operator-scripts\") pod \"nova-api-db-create-jpvh9\" (UID: 
\"2043c096-6123-44c7-90f8-b91a70523471\") " pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.201886 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkkq4\" (UniqueName: \"kubernetes.io/projected/2043c096-6123-44c7-90f8-b91a70523471-kube-api-access-gkkq4\") pod \"nova-api-db-create-jpvh9\" (UID: \"2043c096-6123-44c7-90f8-b91a70523471\") " pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.262436 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ghnt\" (UniqueName: \"kubernetes.io/projected/90e85f8e-05b4-4780-87f4-df861db34de7-kube-api-access-4ghnt\") pod \"nova-api-1524-account-create-update-nlmgn\" (UID: \"90e85f8e-05b4-4780-87f4-df861db34de7\") " pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.262517 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-operator-scripts\") pod \"nova-cell0-db-create-g9qh5\" (UID: \"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\") " pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.262576 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e85f8e-05b4-4780-87f4-df861db34de7-operator-scripts\") pod \"nova-api-1524-account-create-update-nlmgn\" (UID: \"90e85f8e-05b4-4780-87f4-df861db34de7\") " pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.262882 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79s4v\" (UniqueName: 
\"kubernetes.io/projected/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-kube-api-access-79s4v\") pod \"nova-cell0-db-create-g9qh5\" (UID: \"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\") " pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.275329 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.289445 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fffm7"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.291520 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.316608 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fffm7"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.330975 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8aef-account-create-update-kkxb4"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.332325 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.335651 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.339864 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8aef-account-create-update-kkxb4"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.365231 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ghnt\" (UniqueName: \"kubernetes.io/projected/90e85f8e-05b4-4780-87f4-df861db34de7-kube-api-access-4ghnt\") pod \"nova-api-1524-account-create-update-nlmgn\" (UID: \"90e85f8e-05b4-4780-87f4-df861db34de7\") " pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.365292 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-operator-scripts\") pod \"nova-cell0-db-create-g9qh5\" (UID: \"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\") " pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.365334 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e85f8e-05b4-4780-87f4-df861db34de7-operator-scripts\") pod \"nova-api-1524-account-create-update-nlmgn\" (UID: \"90e85f8e-05b4-4780-87f4-df861db34de7\") " pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.365533 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79s4v\" (UniqueName: \"kubernetes.io/projected/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-kube-api-access-79s4v\") pod \"nova-cell0-db-create-g9qh5\" (UID: 
\"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\") " pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.366681 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-operator-scripts\") pod \"nova-cell0-db-create-g9qh5\" (UID: \"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\") " pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.367068 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e85f8e-05b4-4780-87f4-df861db34de7-operator-scripts\") pod \"nova-api-1524-account-create-update-nlmgn\" (UID: \"90e85f8e-05b4-4780-87f4-df861db34de7\") " pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.384822 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79s4v\" (UniqueName: \"kubernetes.io/projected/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-kube-api-access-79s4v\") pod \"nova-cell0-db-create-g9qh5\" (UID: \"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\") " pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.385341 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ghnt\" (UniqueName: \"kubernetes.io/projected/90e85f8e-05b4-4780-87f4-df861db34de7-kube-api-access-4ghnt\") pod \"nova-api-1524-account-create-update-nlmgn\" (UID: \"90e85f8e-05b4-4780-87f4-df861db34de7\") " pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.458621 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.458659 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.468364 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdsr\" (UniqueName: \"kubernetes.io/projected/5af79719-de27-49b3-aa21-401419db6fc3-kube-api-access-sqdsr\") pod \"nova-cell0-8aef-account-create-update-kkxb4\" (UID: \"5af79719-de27-49b3-aa21-401419db6fc3\") " pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.468454 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5af79719-de27-49b3-aa21-401419db6fc3-operator-scripts\") pod \"nova-cell0-8aef-account-create-update-kkxb4\" (UID: \"5af79719-de27-49b3-aa21-401419db6fc3\") " pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.468529 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bjrf\" (UniqueName: \"kubernetes.io/projected/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-kube-api-access-2bjrf\") pod \"nova-cell1-db-create-fffm7\" (UID: \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\") " pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.468712 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-operator-scripts\") pod \"nova-cell1-db-create-fffm7\" (UID: \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\") " pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.471519 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.482680 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.486980 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-da29-account-create-update-4x25g"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.488389 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.490722 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.507952 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-da29-account-create-update-4x25g"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.534938 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.573139 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.575659 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5af79719-de27-49b3-aa21-401419db6fc3-operator-scripts\") pod \"nova-cell0-8aef-account-create-update-kkxb4\" (UID: \"5af79719-de27-49b3-aa21-401419db6fc3\") " pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.578497 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bjrf\" (UniqueName: 
\"kubernetes.io/projected/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-kube-api-access-2bjrf\") pod \"nova-cell1-db-create-fffm7\" (UID: \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\") " pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.580604 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5af79719-de27-49b3-aa21-401419db6fc3-operator-scripts\") pod \"nova-cell0-8aef-account-create-update-kkxb4\" (UID: \"5af79719-de27-49b3-aa21-401419db6fc3\") " pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.581048 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-operator-scripts\") pod \"nova-cell1-db-create-fffm7\" (UID: \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\") " pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.583275 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqdsr\" (UniqueName: \"kubernetes.io/projected/5af79719-de27-49b3-aa21-401419db6fc3-kube-api-access-sqdsr\") pod \"nova-cell0-8aef-account-create-update-kkxb4\" (UID: \"5af79719-de27-49b3-aa21-401419db6fc3\") " pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.586334 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-operator-scripts\") pod \"nova-cell1-db-create-fffm7\" (UID: \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\") " pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:28 crc kubenswrapper[5029]: W0313 20:50:28.600309 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8302d1e1_054e_4db3_be98_987dbfa076c0.slice/crio-0ed6418730620829f97783dc56fefb321613bd2f7a7268a48c58a90ccb846c3d WatchSource:0}: Error finding container 0ed6418730620829f97783dc56fefb321613bd2f7a7268a48c58a90ccb846c3d: Status 404 returned error can't find the container with id 0ed6418730620829f97783dc56fefb321613bd2f7a7268a48c58a90ccb846c3d Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.607519 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bjrf\" (UniqueName: \"kubernetes.io/projected/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-kube-api-access-2bjrf\") pod \"nova-cell1-db-create-fffm7\" (UID: \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\") " pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.614159 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqdsr\" (UniqueName: \"kubernetes.io/projected/5af79719-de27-49b3-aa21-401419db6fc3-kube-api-access-sqdsr\") pod \"nova-cell0-8aef-account-create-update-kkxb4\" (UID: \"5af79719-de27-49b3-aa21-401419db6fc3\") " pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.619693 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.660921 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd50413-3521-4a4b-9063-9a8728b0a7aa" path="/var/lib/kubelet/pods/7fd50413-3521-4a4b-9063-9a8728b0a7aa/volumes" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.661728 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.662132 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.686711 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12bb1159-3ba4-45dd-8bc3-26382b17baf5-operator-scripts\") pod \"nova-cell1-da29-account-create-update-4x25g\" (UID: \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\") " pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.687189 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9bs7\" (UniqueName: \"kubernetes.io/projected/12bb1159-3ba4-45dd-8bc3-26382b17baf5-kube-api-access-n9bs7\") pod \"nova-cell1-da29-account-create-update-4x25g\" (UID: \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\") " pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.700643 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.791518 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9bs7\" (UniqueName: \"kubernetes.io/projected/12bb1159-3ba4-45dd-8bc3-26382b17baf5-kube-api-access-n9bs7\") pod \"nova-cell1-da29-account-create-update-4x25g\" (UID: \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\") " pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.791952 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12bb1159-3ba4-45dd-8bc3-26382b17baf5-operator-scripts\") pod \"nova-cell1-da29-account-create-update-4x25g\" (UID: \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\") " pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:28 
crc kubenswrapper[5029]: I0313 20:50:28.792895 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12bb1159-3ba4-45dd-8bc3-26382b17baf5-operator-scripts\") pod \"nova-cell1-da29-account-create-update-4x25g\" (UID: \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\") " pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.815823 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9bs7\" (UniqueName: \"kubernetes.io/projected/12bb1159-3ba4-45dd-8bc3-26382b17baf5-kube-api-access-n9bs7\") pod \"nova-cell1-da29-account-create-update-4x25g\" (UID: \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\") " pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.828061 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:28 crc kubenswrapper[5029]: I0313 20:50:28.866581 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jpvh9"] Mar 13 20:50:28 crc kubenswrapper[5029]: W0313 20:50:28.899707 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2043c096_6123_44c7_90f8_b91a70523471.slice/crio-f22674322bb9a9d16f042418b420d7624e5e3b560b260b8503e1b03320c4e5f6 WatchSource:0}: Error finding container f22674322bb9a9d16f042418b420d7624e5e3b560b260b8503e1b03320c4e5f6: Status 404 returned error can't find the container with id f22674322bb9a9d16f042418b420d7624e5e3b560b260b8503e1b03320c4e5f6 Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.160540 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1524-account-create-update-nlmgn"] Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.331178 5029 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fffm7"] Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.339770 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-g9qh5"] Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.364461 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1524-account-create-update-nlmgn" event={"ID":"90e85f8e-05b4-4780-87f4-df861db34de7","Type":"ContainerStarted","Data":"3045ffb91b36fcda46d22e3c6abb3b8aba9f435d1ac6c74e575cf9506289624f"} Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.367611 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerStarted","Data":"0ed6418730620829f97783dc56fefb321613bd2f7a7268a48c58a90ccb846c3d"} Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.370319 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jpvh9" event={"ID":"2043c096-6123-44c7-90f8-b91a70523471","Type":"ContainerStarted","Data":"f22674322bb9a9d16f042418b420d7624e5e3b560b260b8503e1b03320c4e5f6"} Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.370762 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.370788 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.542980 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8aef-account-create-update-kkxb4"] Mar 13 20:50:29 crc kubenswrapper[5029]: W0313 20:50:29.553563 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af79719_de27_49b3_aa21_401419db6fc3.slice/crio-e83e66b3c4843c6eb71f21302396e4c4698d8e40601d2e165b8d32980263f44f WatchSource:0}: Error finding container e83e66b3c4843c6eb71f21302396e4c4698d8e40601d2e165b8d32980263f44f: Status 404 returned error can't find the container with id e83e66b3c4843c6eb71f21302396e4c4698d8e40601d2e165b8d32980263f44f Mar 13 20:50:29 crc kubenswrapper[5029]: I0313 20:50:29.733604 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-da29-account-create-update-4x25g"] Mar 13 20:50:29 crc kubenswrapper[5029]: W0313 20:50:29.757582 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12bb1159_3ba4_45dd_8bc3_26382b17baf5.slice/crio-30c18a31850d62fd162b15fe2d72170587cd083e0384115f2da879478d3f7019 WatchSource:0}: Error finding container 30c18a31850d62fd162b15fe2d72170587cd083e0384115f2da879478d3f7019: Status 404 returned error can't find the container with id 30c18a31850d62fd162b15fe2d72170587cd083e0384115f2da879478d3f7019 Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.384022 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerStarted","Data":"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.388428 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fffm7" event={"ID":"f98b2e2a-db84-4220-ad1a-5e0e8a867b68","Type":"ContainerDied","Data":"aecdc340dff1a05a4b04259d24da344591b88d8816c641d63d8f444c918f2d5d"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.388286 5029 generic.go:334] "Generic (PLEG): container finished" podID="f98b2e2a-db84-4220-ad1a-5e0e8a867b68" containerID="aecdc340dff1a05a4b04259d24da344591b88d8816c641d63d8f444c918f2d5d" exitCode=0 
Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.388968 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fffm7" event={"ID":"f98b2e2a-db84-4220-ad1a-5e0e8a867b68","Type":"ContainerStarted","Data":"9d0617451afa7833efc52ec26c7ba552af90e4e0adcb0ea40be3d9b6471f25eb"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.394279 5029 generic.go:334] "Generic (PLEG): container finished" podID="5af79719-de27-49b3-aa21-401419db6fc3" containerID="21e79b1a579aa9f29541504aa14c4d90fc0b033f370b63f35fc7cbd54b9da387" exitCode=0 Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.394324 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" event={"ID":"5af79719-de27-49b3-aa21-401419db6fc3","Type":"ContainerDied","Data":"21e79b1a579aa9f29541504aa14c4d90fc0b033f370b63f35fc7cbd54b9da387"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.394364 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" event={"ID":"5af79719-de27-49b3-aa21-401419db6fc3","Type":"ContainerStarted","Data":"e83e66b3c4843c6eb71f21302396e4c4698d8e40601d2e165b8d32980263f44f"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.396412 5029 generic.go:334] "Generic (PLEG): container finished" podID="c90c73b6-45b6-4a3e-bd24-4bb1873b73cd" containerID="859ad1074b9cde91a7e3200a3bdbcebfda615090bdf40dc598153476471c4232" exitCode=0 Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.396493 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-g9qh5" event={"ID":"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd","Type":"ContainerDied","Data":"859ad1074b9cde91a7e3200a3bdbcebfda615090bdf40dc598153476471c4232"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.396519 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-g9qh5" 
event={"ID":"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd","Type":"ContainerStarted","Data":"656ab54d355916a748f09859bd21b150d459fceebc68a9122f296b0353c27454"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.398266 5029 generic.go:334] "Generic (PLEG): container finished" podID="12bb1159-3ba4-45dd-8bc3-26382b17baf5" containerID="f4bce576fd371bc94e26b2ec106bf9d433ffe799345f5e05a5f619aaa12aabdd" exitCode=0 Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.398333 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da29-account-create-update-4x25g" event={"ID":"12bb1159-3ba4-45dd-8bc3-26382b17baf5","Type":"ContainerDied","Data":"f4bce576fd371bc94e26b2ec106bf9d433ffe799345f5e05a5f619aaa12aabdd"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.398364 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da29-account-create-update-4x25g" event={"ID":"12bb1159-3ba4-45dd-8bc3-26382b17baf5","Type":"ContainerStarted","Data":"30c18a31850d62fd162b15fe2d72170587cd083e0384115f2da879478d3f7019"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.401647 5029 generic.go:334] "Generic (PLEG): container finished" podID="2043c096-6123-44c7-90f8-b91a70523471" containerID="9ffc4c109d12644f0be852f0c2118230b7eca8481e488e657056375dd3c6afc2" exitCode=0 Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.401734 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jpvh9" event={"ID":"2043c096-6123-44c7-90f8-b91a70523471","Type":"ContainerDied","Data":"9ffc4c109d12644f0be852f0c2118230b7eca8481e488e657056375dd3c6afc2"} Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.406896 5029 generic.go:334] "Generic (PLEG): container finished" podID="90e85f8e-05b4-4780-87f4-df861db34de7" containerID="1eddce6f03cd99f2710df6b86936af86529b10595cc42b1e643a92fe58302af9" exitCode=0 Mar 13 20:50:30 crc kubenswrapper[5029]: I0313 20:50:30.409023 5029 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-1524-account-create-update-nlmgn" event={"ID":"90e85f8e-05b4-4780-87f4-df861db34de7","Type":"ContainerDied","Data":"1eddce6f03cd99f2710df6b86936af86529b10595cc42b1e643a92fe58302af9"} Mar 13 20:50:31 crc kubenswrapper[5029]: I0313 20:50:31.104053 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d1917286-7b0a-46c8-a296-fab758373bc5" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.171:3000/\": dial tcp 10.217.0.171:3000: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 13 20:50:31 crc kubenswrapper[5029]: I0313 20:50:31.387684 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Mar 13 20:50:31 crc kubenswrapper[5029]: I0313 20:50:31.420371 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerStarted","Data":"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e"} Mar 13 20:50:31 crc kubenswrapper[5029]: I0313 20:50:31.420427 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerStarted","Data":"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3"} Mar 13 20:50:31 crc kubenswrapper[5029]: I0313 20:50:31.956432 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.065137 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bjrf\" (UniqueName: \"kubernetes.io/projected/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-kube-api-access-2bjrf\") pod \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\" (UID: \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.065380 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-operator-scripts\") pod \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\" (UID: \"f98b2e2a-db84-4220-ad1a-5e0e8a867b68\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.070088 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f98b2e2a-db84-4220-ad1a-5e0e8a867b68" (UID: "f98b2e2a-db84-4220-ad1a-5e0e8a867b68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.079226 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-kube-api-access-2bjrf" (OuterVolumeSpecName: "kube-api-access-2bjrf") pod "f98b2e2a-db84-4220-ad1a-5e0e8a867b68" (UID: "f98b2e2a-db84-4220-ad1a-5e0e8a867b68"). InnerVolumeSpecName "kube-api-access-2bjrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.095963 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.107578 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.172078 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9bs7\" (UniqueName: \"kubernetes.io/projected/12bb1159-3ba4-45dd-8bc3-26382b17baf5-kube-api-access-n9bs7\") pod \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\" (UID: \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.172178 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e85f8e-05b4-4780-87f4-df861db34de7-operator-scripts\") pod \"90e85f8e-05b4-4780-87f4-df861db34de7\" (UID: \"90e85f8e-05b4-4780-87f4-df861db34de7\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.172229 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ghnt\" (UniqueName: \"kubernetes.io/projected/90e85f8e-05b4-4780-87f4-df861db34de7-kube-api-access-4ghnt\") pod \"90e85f8e-05b4-4780-87f4-df861db34de7\" (UID: \"90e85f8e-05b4-4780-87f4-df861db34de7\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.172251 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12bb1159-3ba4-45dd-8bc3-26382b17baf5-operator-scripts\") pod \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\" (UID: \"12bb1159-3ba4-45dd-8bc3-26382b17baf5\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.173141 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bjrf\" (UniqueName: \"kubernetes.io/projected/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-kube-api-access-2bjrf\") on node 
\"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.173167 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f98b2e2a-db84-4220-ad1a-5e0e8a867b68-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.175581 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12bb1159-3ba4-45dd-8bc3-26382b17baf5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12bb1159-3ba4-45dd-8bc3-26382b17baf5" (UID: "12bb1159-3ba4-45dd-8bc3-26382b17baf5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.176418 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12bb1159-3ba4-45dd-8bc3-26382b17baf5-kube-api-access-n9bs7" (OuterVolumeSpecName: "kube-api-access-n9bs7") pod "12bb1159-3ba4-45dd-8bc3-26382b17baf5" (UID: "12bb1159-3ba4-45dd-8bc3-26382b17baf5"). InnerVolumeSpecName "kube-api-access-n9bs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.177730 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e85f8e-05b4-4780-87f4-df861db34de7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90e85f8e-05b4-4780-87f4-df861db34de7" (UID: "90e85f8e-05b4-4780-87f4-df861db34de7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.183110 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e85f8e-05b4-4780-87f4-df861db34de7-kube-api-access-4ghnt" (OuterVolumeSpecName: "kube-api-access-4ghnt") pod "90e85f8e-05b4-4780-87f4-df861db34de7" (UID: "90e85f8e-05b4-4780-87f4-df861db34de7"). InnerVolumeSpecName "kube-api-access-4ghnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.258696 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.265671 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.273166 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.273937 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2043c096-6123-44c7-90f8-b91a70523471-operator-scripts\") pod \"2043c096-6123-44c7-90f8-b91a70523471\" (UID: \"2043c096-6123-44c7-90f8-b91a70523471\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.274309 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkkq4\" (UniqueName: \"kubernetes.io/projected/2043c096-6123-44c7-90f8-b91a70523471-kube-api-access-gkkq4\") pod \"2043c096-6123-44c7-90f8-b91a70523471\" (UID: \"2043c096-6123-44c7-90f8-b91a70523471\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.274798 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ghnt\" (UniqueName: \"kubernetes.io/projected/90e85f8e-05b4-4780-87f4-df861db34de7-kube-api-access-4ghnt\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.274816 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12bb1159-3ba4-45dd-8bc3-26382b17baf5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.274826 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9bs7\" (UniqueName: \"kubernetes.io/projected/12bb1159-3ba4-45dd-8bc3-26382b17baf5-kube-api-access-n9bs7\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.274836 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e85f8e-05b4-4780-87f4-df861db34de7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.275599 5029 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2043c096-6123-44c7-90f8-b91a70523471-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2043c096-6123-44c7-90f8-b91a70523471" (UID: "2043c096-6123-44c7-90f8-b91a70523471"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.278503 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2043c096-6123-44c7-90f8-b91a70523471-kube-api-access-gkkq4" (OuterVolumeSpecName: "kube-api-access-gkkq4") pod "2043c096-6123-44c7-90f8-b91a70523471" (UID: "2043c096-6123-44c7-90f8-b91a70523471"). InnerVolumeSpecName "kube-api-access-gkkq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.375733 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqdsr\" (UniqueName: \"kubernetes.io/projected/5af79719-de27-49b3-aa21-401419db6fc3-kube-api-access-sqdsr\") pod \"5af79719-de27-49b3-aa21-401419db6fc3\" (UID: \"5af79719-de27-49b3-aa21-401419db6fc3\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.375782 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-operator-scripts\") pod \"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\" (UID: \"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.375962 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5af79719-de27-49b3-aa21-401419db6fc3-operator-scripts\") pod \"5af79719-de27-49b3-aa21-401419db6fc3\" (UID: \"5af79719-de27-49b3-aa21-401419db6fc3\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.376150 5029 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79s4v\" (UniqueName: \"kubernetes.io/projected/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-kube-api-access-79s4v\") pod \"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\" (UID: \"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd\") " Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.376492 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c90c73b6-45b6-4a3e-bd24-4bb1873b73cd" (UID: "c90c73b6-45b6-4a3e-bd24-4bb1873b73cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.376492 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af79719-de27-49b3-aa21-401419db6fc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5af79719-de27-49b3-aa21-401419db6fc3" (UID: "5af79719-de27-49b3-aa21-401419db6fc3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.376621 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkkq4\" (UniqueName: \"kubernetes.io/projected/2043c096-6123-44c7-90f8-b91a70523471-kube-api-access-gkkq4\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.376647 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2043c096-6123-44c7-90f8-b91a70523471-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.379971 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-kube-api-access-79s4v" (OuterVolumeSpecName: "kube-api-access-79s4v") pod "c90c73b6-45b6-4a3e-bd24-4bb1873b73cd" (UID: "c90c73b6-45b6-4a3e-bd24-4bb1873b73cd"). InnerVolumeSpecName "kube-api-access-79s4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.380431 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af79719-de27-49b3-aa21-401419db6fc3-kube-api-access-sqdsr" (OuterVolumeSpecName: "kube-api-access-sqdsr") pod "5af79719-de27-49b3-aa21-401419db6fc3" (UID: "5af79719-de27-49b3-aa21-401419db6fc3"). InnerVolumeSpecName "kube-api-access-sqdsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.435144 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da29-account-create-update-4x25g" event={"ID":"12bb1159-3ba4-45dd-8bc3-26382b17baf5","Type":"ContainerDied","Data":"30c18a31850d62fd162b15fe2d72170587cd083e0384115f2da879478d3f7019"} Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.435215 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30c18a31850d62fd162b15fe2d72170587cd083e0384115f2da879478d3f7019" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.435166 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da29-account-create-update-4x25g" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.437104 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jpvh9" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.437104 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jpvh9" event={"ID":"2043c096-6123-44c7-90f8-b91a70523471","Type":"ContainerDied","Data":"f22674322bb9a9d16f042418b420d7624e5e3b560b260b8503e1b03320c4e5f6"} Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.437153 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f22674322bb9a9d16f042418b420d7624e5e3b560b260b8503e1b03320c4e5f6" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.439006 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1524-account-create-update-nlmgn" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.439074 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1524-account-create-update-nlmgn" event={"ID":"90e85f8e-05b4-4780-87f4-df861db34de7","Type":"ContainerDied","Data":"3045ffb91b36fcda46d22e3c6abb3b8aba9f435d1ac6c74e575cf9506289624f"} Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.439122 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3045ffb91b36fcda46d22e3c6abb3b8aba9f435d1ac6c74e575cf9506289624f" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.441172 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fffm7" event={"ID":"f98b2e2a-db84-4220-ad1a-5e0e8a867b68","Type":"ContainerDied","Data":"9d0617451afa7833efc52ec26c7ba552af90e4e0adcb0ea40be3d9b6471f25eb"} Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.441210 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d0617451afa7833efc52ec26c7ba552af90e4e0adcb0ea40be3d9b6471f25eb" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.441339 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fffm7" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.450006 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" event={"ID":"5af79719-de27-49b3-aa21-401419db6fc3","Type":"ContainerDied","Data":"e83e66b3c4843c6eb71f21302396e4c4698d8e40601d2e165b8d32980263f44f"} Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.450052 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e83e66b3c4843c6eb71f21302396e4c4698d8e40601d2e165b8d32980263f44f" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.450081 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8aef-account-create-update-kkxb4" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.453941 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-g9qh5" event={"ID":"c90c73b6-45b6-4a3e-bd24-4bb1873b73cd","Type":"ContainerDied","Data":"656ab54d355916a748f09859bd21b150d459fceebc68a9122f296b0353c27454"} Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.453980 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="656ab54d355916a748f09859bd21b150d459fceebc68a9122f296b0353c27454" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.454043 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-g9qh5" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.481566 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79s4v\" (UniqueName: \"kubernetes.io/projected/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-kube-api-access-79s4v\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.481604 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqdsr\" (UniqueName: \"kubernetes.io/projected/5af79719-de27-49b3-aa21-401419db6fc3-kube-api-access-sqdsr\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.481630 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.481645 5029 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5af79719-de27-49b3-aa21-401419db6fc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.527526 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.527663 5029 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.529219 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.719351 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.719705 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.762163 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:32 crc kubenswrapper[5029]: I0313 20:50:32.770184 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:33 crc kubenswrapper[5029]: I0313 20:50:33.489461 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:33 crc kubenswrapper[5029]: I0313 20:50:33.489811 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:34 crc kubenswrapper[5029]: I0313 20:50:34.501003 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerStarted","Data":"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc"} Mar 13 20:50:34 crc kubenswrapper[5029]: I0313 20:50:34.501377 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="ceilometer-central-agent" containerID="cri-o://88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30" gracePeriod=30 Mar 13 20:50:34 crc kubenswrapper[5029]: I0313 20:50:34.501400 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="proxy-httpd" containerID="cri-o://b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc" gracePeriod=30 Mar 13 20:50:34 crc kubenswrapper[5029]: I0313 20:50:34.501410 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" 
containerName="sg-core" containerID="cri-o://4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e" gracePeriod=30 Mar 13 20:50:34 crc kubenswrapper[5029]: I0313 20:50:34.501478 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="ceilometer-notification-agent" containerID="cri-o://51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3" gracePeriod=30 Mar 13 20:50:34 crc kubenswrapper[5029]: I0313 20:50:34.539495 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.911404344 podStartE2EDuration="7.539472333s" podCreationTimestamp="2026-03-13 20:50:27 +0000 UTC" firstStartedPulling="2026-03-13 20:50:28.627371399 +0000 UTC m=+1388.643453802" lastFinishedPulling="2026-03-13 20:50:33.255439378 +0000 UTC m=+1393.271521791" observedRunningTime="2026-03-13 20:50:34.534797396 +0000 UTC m=+1394.550879819" watchObservedRunningTime="2026-03-13 20:50:34.539472333 +0000 UTC m=+1394.555554736" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.320104 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.450556 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jvpz\" (UniqueName: \"kubernetes.io/projected/8302d1e1-054e-4db3-be98-987dbfa076c0-kube-api-access-4jvpz\") pod \"8302d1e1-054e-4db3-be98-987dbfa076c0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.450644 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-scripts\") pod \"8302d1e1-054e-4db3-be98-987dbfa076c0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.450704 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-sg-core-conf-yaml\") pod \"8302d1e1-054e-4db3-be98-987dbfa076c0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.450757 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-log-httpd\") pod \"8302d1e1-054e-4db3-be98-987dbfa076c0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.450891 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-run-httpd\") pod \"8302d1e1-054e-4db3-be98-987dbfa076c0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.450968 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-combined-ca-bundle\") pod \"8302d1e1-054e-4db3-be98-987dbfa076c0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.451040 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-config-data\") pod \"8302d1e1-054e-4db3-be98-987dbfa076c0\" (UID: \"8302d1e1-054e-4db3-be98-987dbfa076c0\") " Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.451790 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8302d1e1-054e-4db3-be98-987dbfa076c0" (UID: "8302d1e1-054e-4db3-be98-987dbfa076c0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.452445 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8302d1e1-054e-4db3-be98-987dbfa076c0" (UID: "8302d1e1-054e-4db3-be98-987dbfa076c0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.452784 5029 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.452805 5029 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8302d1e1-054e-4db3-be98-987dbfa076c0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.459598 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8302d1e1-054e-4db3-be98-987dbfa076c0-kube-api-access-4jvpz" (OuterVolumeSpecName: "kube-api-access-4jvpz") pod "8302d1e1-054e-4db3-be98-987dbfa076c0" (UID: "8302d1e1-054e-4db3-be98-987dbfa076c0"). InnerVolumeSpecName "kube-api-access-4jvpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.459641 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-scripts" (OuterVolumeSpecName: "scripts") pod "8302d1e1-054e-4db3-be98-987dbfa076c0" (UID: "8302d1e1-054e-4db3-be98-987dbfa076c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.490585 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8302d1e1-054e-4db3-be98-987dbfa076c0" (UID: "8302d1e1-054e-4db3-be98-987dbfa076c0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.520923 5029 generic.go:334] "Generic (PLEG): container finished" podID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerID="b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc" exitCode=0 Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.522768 5029 generic.go:334] "Generic (PLEG): container finished" podID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerID="4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e" exitCode=2 Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.522896 5029 generic.go:334] "Generic (PLEG): container finished" podID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerID="51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3" exitCode=0 Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.522987 5029 generic.go:334] "Generic (PLEG): container finished" podID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerID="88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30" exitCode=0 Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.521280 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.521324 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerDied","Data":"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc"} Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.523221 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerDied","Data":"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e"} Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.523263 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerDied","Data":"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3"} Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.523281 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerDied","Data":"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30"} Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.523295 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8302d1e1-054e-4db3-be98-987dbfa076c0","Type":"ContainerDied","Data":"0ed6418730620829f97783dc56fefb321613bd2f7a7268a48c58a90ccb846c3d"} Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.523305 5029 scope.go:117] "RemoveContainer" containerID="b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.523682 5029 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.523772 5029 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.545893 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8302d1e1-054e-4db3-be98-987dbfa076c0" (UID: "8302d1e1-054e-4db3-be98-987dbfa076c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.557231 5029 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.557274 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.557290 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jvpz\" (UniqueName: \"kubernetes.io/projected/8302d1e1-054e-4db3-be98-987dbfa076c0-kube-api-access-4jvpz\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.557305 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.569895 5029 scope.go:117] "RemoveContainer" containerID="4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.586408 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-config-data" (OuterVolumeSpecName: "config-data") pod "8302d1e1-054e-4db3-be98-987dbfa076c0" 
(UID: "8302d1e1-054e-4db3-be98-987dbfa076c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.619828 5029 scope.go:117] "RemoveContainer" containerID="51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.659722 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8302d1e1-054e-4db3-be98-987dbfa076c0-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.709289 5029 scope.go:117] "RemoveContainer" containerID="88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.733674 5029 scope.go:117] "RemoveContainer" containerID="b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.734653 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc\": container with ID starting with b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc not found: ID does not exist" containerID="b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.734702 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc"} err="failed to get container status \"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc\": rpc error: code = NotFound desc = could not find container \"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc\": container with ID starting with b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc not found: ID does not exist" Mar 13 
20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.734730 5029 scope.go:117] "RemoveContainer" containerID="4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.735279 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e\": container with ID starting with 4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e not found: ID does not exist" containerID="4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.735310 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e"} err="failed to get container status \"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e\": rpc error: code = NotFound desc = could not find container \"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e\": container with ID starting with 4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.735331 5029 scope.go:117] "RemoveContainer" containerID="51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.735600 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3\": container with ID starting with 51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3 not found: ID does not exist" containerID="51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.735629 5029 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3"} err="failed to get container status \"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3\": rpc error: code = NotFound desc = could not find container \"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3\": container with ID starting with 51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3 not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.735648 5029 scope.go:117] "RemoveContainer" containerID="88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.736060 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30\": container with ID starting with 88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30 not found: ID does not exist" containerID="88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.736241 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30"} err="failed to get container status \"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30\": rpc error: code = NotFound desc = could not find container \"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30\": container with ID starting with 88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30 not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.736410 5029 scope.go:117] "RemoveContainer" containerID="b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.737028 5029 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc"} err="failed to get container status \"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc\": rpc error: code = NotFound desc = could not find container \"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc\": container with ID starting with b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.737102 5029 scope.go:117] "RemoveContainer" containerID="4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.737509 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e"} err="failed to get container status \"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e\": rpc error: code = NotFound desc = could not find container \"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e\": container with ID starting with 4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.737600 5029 scope.go:117] "RemoveContainer" containerID="51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.738184 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3"} err="failed to get container status \"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3\": rpc error: code = NotFound desc = could not find container \"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3\": container with ID starting with 51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3 not 
found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.738237 5029 scope.go:117] "RemoveContainer" containerID="88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.738629 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30"} err="failed to get container status \"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30\": rpc error: code = NotFound desc = could not find container \"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30\": container with ID starting with 88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30 not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.738676 5029 scope.go:117] "RemoveContainer" containerID="b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.739022 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc"} err="failed to get container status \"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc\": rpc error: code = NotFound desc = could not find container \"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc\": container with ID starting with b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.739060 5029 scope.go:117] "RemoveContainer" containerID="4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.739600 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e"} err="failed to get 
container status \"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e\": rpc error: code = NotFound desc = could not find container \"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e\": container with ID starting with 4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.739632 5029 scope.go:117] "RemoveContainer" containerID="51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.739992 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3"} err="failed to get container status \"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3\": rpc error: code = NotFound desc = could not find container \"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3\": container with ID starting with 51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3 not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.740101 5029 scope.go:117] "RemoveContainer" containerID="88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.740550 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30"} err="failed to get container status \"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30\": rpc error: code = NotFound desc = could not find container \"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30\": container with ID starting with 88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30 not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.740589 5029 scope.go:117] "RemoveContainer" 
containerID="b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.740971 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc"} err="failed to get container status \"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc\": rpc error: code = NotFound desc = could not find container \"b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc\": container with ID starting with b59c0bc80c7d238a4f7547f623a6414891e50ab68ee4c51e13568cb0ba8fe8dc not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.741003 5029 scope.go:117] "RemoveContainer" containerID="4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.741347 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e"} err="failed to get container status \"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e\": rpc error: code = NotFound desc = could not find container \"4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e\": container with ID starting with 4ddc64e31e524ffd6d615cb64a03a3a4d0aebb54cd7027759d8703ada7fd582e not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.741468 5029 scope.go:117] "RemoveContainer" containerID="51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.741933 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3"} err="failed to get container status \"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3\": rpc error: code = NotFound desc = could 
not find container \"51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3\": container with ID starting with 51543379866ef129b72cc0d06233b6147ce1eda784d579f1b4e76c216770afb3 not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.741960 5029 scope.go:117] "RemoveContainer" containerID="88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.742226 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30"} err="failed to get container status \"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30\": rpc error: code = NotFound desc = could not find container \"88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30\": container with ID starting with 88e11e33fbe0d42cd369b5263cd67cadff1debb598a396bba1ba39a4707f0b30 not found: ID does not exist" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.778607 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.782615 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.921042 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.931187 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949090 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949501 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2043c096-6123-44c7-90f8-b91a70523471" 
containerName="mariadb-database-create" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949516 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="2043c096-6123-44c7-90f8-b91a70523471" containerName="mariadb-database-create" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949529 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="proxy-httpd" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949536 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="proxy-httpd" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949547 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e85f8e-05b4-4780-87f4-df861db34de7" containerName="mariadb-account-create-update" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949554 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e85f8e-05b4-4780-87f4-df861db34de7" containerName="mariadb-account-create-update" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949562 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af79719-de27-49b3-aa21-401419db6fc3" containerName="mariadb-account-create-update" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949568 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af79719-de27-49b3-aa21-401419db6fc3" containerName="mariadb-account-create-update" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949577 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="ceilometer-notification-agent" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949582 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="ceilometer-notification-agent" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949597 5029 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="sg-core" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949603 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="sg-core" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949614 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90c73b6-45b6-4a3e-bd24-4bb1873b73cd" containerName="mariadb-database-create" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949620 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90c73b6-45b6-4a3e-bd24-4bb1873b73cd" containerName="mariadb-database-create" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949643 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12bb1159-3ba4-45dd-8bc3-26382b17baf5" containerName="mariadb-account-create-update" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949649 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="12bb1159-3ba4-45dd-8bc3-26382b17baf5" containerName="mariadb-account-create-update" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949659 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98b2e2a-db84-4220-ad1a-5e0e8a867b68" containerName="mariadb-database-create" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949665 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98b2e2a-db84-4220-ad1a-5e0e8a867b68" containerName="mariadb-database-create" Mar 13 20:50:35 crc kubenswrapper[5029]: E0313 20:50:35.949684 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="ceilometer-central-agent" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949690 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="ceilometer-central-agent" Mar 13 20:50:35 crc kubenswrapper[5029]: 
I0313 20:50:35.949865 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="proxy-httpd" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949884 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="ceilometer-notification-agent" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949894 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="c90c73b6-45b6-4a3e-bd24-4bb1873b73cd" containerName="mariadb-database-create" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949905 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e85f8e-05b4-4780-87f4-df861db34de7" containerName="mariadb-account-create-update" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949918 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="ceilometer-central-agent" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949935 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98b2e2a-db84-4220-ad1a-5e0e8a867b68" containerName="mariadb-database-create" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949945 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="2043c096-6123-44c7-90f8-b91a70523471" containerName="mariadb-database-create" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949954 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af79719-de27-49b3-aa21-401419db6fc3" containerName="mariadb-account-create-update" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949966 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" containerName="sg-core" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.949976 5029 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="12bb1159-3ba4-45dd-8bc3-26382b17baf5" containerName="mariadb-account-create-update" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.951542 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.955059 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.955758 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:50:35 crc kubenswrapper[5029]: I0313 20:50:35.980718 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.068920 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.069337 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-scripts\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.069374 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-config-data\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.069416 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-run-httpd\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.069471 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6kh\" (UniqueName: \"kubernetes.io/projected/78eeb560-069f-4773-b6be-77e8d34acd2f-kube-api-access-5w6kh\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.069508 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.069538 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-log-httpd\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.171999 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-run-httpd\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.172096 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6kh\" (UniqueName: \"kubernetes.io/projected/78eeb560-069f-4773-b6be-77e8d34acd2f-kube-api-access-5w6kh\") pod \"ceilometer-0\" (UID: 
\"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.172153 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.172180 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-log-httpd\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.172228 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.172307 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-scripts\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.172332 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-config-data\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.174449 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-run-httpd\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.174536 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-log-httpd\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.178576 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-config-data\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.182807 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.187470 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.190143 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-scripts\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.192212 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5w6kh\" (UniqueName: \"kubernetes.io/projected/78eeb560-069f-4773-b6be-77e8d34acd2f-kube-api-access-5w6kh\") pod \"ceilometer-0\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.283023 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.621629 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8302d1e1-054e-4db3-be98-987dbfa076c0" path="/var/lib/kubelet/pods/8302d1e1-054e-4db3-be98-987dbfa076c0/volumes" Mar 13 20:50:36 crc kubenswrapper[5029]: I0313 20:50:36.774723 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:37 crc kubenswrapper[5029]: I0313 20:50:37.549908 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerStarted","Data":"2b4e522a52d4ba332624df021f9466d706d32c2eae0350cda102e4d0c366dd8a"} Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.321091 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.579135 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerStarted","Data":"eca50e74e990c5d503eb8590b5c3475be991f46735e404f2b50ca967d006e068"} Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.579450 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerStarted","Data":"e1e26a5f92aa9364b4593c1d404f1fdf0c3e5751affc704becbe5569ed535c87"} Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.690951 5029 
pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podda8a5250-75de-4986-ab96-2415b667cac1"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podda8a5250-75de-4986-ab96-2415b667cac1] : Timed out while waiting for systemd to remove kubepods-besteffort-podda8a5250_75de_4986_ab96_2415b667cac1.slice" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.724610 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mwzf9"] Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.726649 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.728873 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.730119 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.733818 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pqbfx" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.745015 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mwzf9"] Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.839875 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85jt\" (UniqueName: \"kubernetes.io/projected/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-kube-api-access-b85jt\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.839924 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-scripts\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.839958 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-config-data\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.840014 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.941906 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b85jt\" (UniqueName: \"kubernetes.io/projected/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-kube-api-access-b85jt\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.942034 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-scripts\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.943195 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-config-data\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.943295 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.947605 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-scripts\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.948116 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-config-data\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.948685 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:38 crc kubenswrapper[5029]: I0313 20:50:38.975557 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b85jt\" (UniqueName: \"kubernetes.io/projected/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-kube-api-access-b85jt\") pod \"nova-cell0-conductor-db-sync-mwzf9\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:39 crc kubenswrapper[5029]: I0313 20:50:39.048716 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:50:39 crc kubenswrapper[5029]: W0313 20:50:39.593286 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b4b9eb_002c_49a2_89ef_65fcf9fd4a32.slice/crio-1e9cbd1f128a8f9b8aeab01e0252f9028ff31b4efac122480f69cf1342f524a6 WatchSource:0}: Error finding container 1e9cbd1f128a8f9b8aeab01e0252f9028ff31b4efac122480f69cf1342f524a6: Status 404 returned error can't find the container with id 1e9cbd1f128a8f9b8aeab01e0252f9028ff31b4efac122480f69cf1342f524a6 Mar 13 20:50:39 crc kubenswrapper[5029]: I0313 20:50:39.604250 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mwzf9"] Mar 13 20:50:39 crc kubenswrapper[5029]: I0313 20:50:39.626218 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerStarted","Data":"551c55f4ccd8c7225844cad0d2d207c1a848bbed12f7d3f8f46fe5e9ada54bdb"} Mar 13 20:50:40 crc kubenswrapper[5029]: I0313 20:50:40.657011 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mwzf9" event={"ID":"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32","Type":"ContainerStarted","Data":"1e9cbd1f128a8f9b8aeab01e0252f9028ff31b4efac122480f69cf1342f524a6"} Mar 13 20:50:41 crc kubenswrapper[5029]: I0313 20:50:41.729297 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerStarted","Data":"adc0788ddd54c41651122b7dec9a8c373edb8d1cf3f335985c269f333ab24508"} Mar 13 20:50:41 crc kubenswrapper[5029]: I0313 20:50:41.731422 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 20:50:41 crc kubenswrapper[5029]: I0313 20:50:41.763439 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.514740534 podStartE2EDuration="6.763418601s" podCreationTimestamp="2026-03-13 20:50:35 +0000 UTC" firstStartedPulling="2026-03-13 20:50:36.780555712 +0000 UTC m=+1396.796638115" lastFinishedPulling="2026-03-13 20:50:41.029233779 +0000 UTC m=+1401.045316182" observedRunningTime="2026-03-13 20:50:41.760247205 +0000 UTC m=+1401.776329618" watchObservedRunningTime="2026-03-13 20:50:41.763418601 +0000 UTC m=+1401.779501004" Mar 13 20:50:49 crc kubenswrapper[5029]: I0313 20:50:49.842068 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mwzf9" event={"ID":"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32","Type":"ContainerStarted","Data":"ce4533e15a29f2473cb0ee6798c4596c018d71abcc8b7a0ed51b74ce156e4024"} Mar 13 20:50:49 crc kubenswrapper[5029]: I0313 20:50:49.883139 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-mwzf9" podStartSLOduration=2.028790611 podStartE2EDuration="11.88310721s" podCreationTimestamp="2026-03-13 20:50:38 +0000 UTC" firstStartedPulling="2026-03-13 20:50:39.612206585 +0000 UTC m=+1399.628288988" lastFinishedPulling="2026-03-13 20:50:49.466523184 +0000 UTC m=+1409.482605587" observedRunningTime="2026-03-13 20:50:49.87280427 +0000 UTC m=+1409.888886683" watchObservedRunningTime="2026-03-13 20:50:49.88310721 +0000 UTC m=+1409.899189613" Mar 13 20:50:54 crc kubenswrapper[5029]: I0313 20:50:54.650995 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Mar 13 20:50:54 crc kubenswrapper[5029]: I0313 20:50:54.655600 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="ceilometer-central-agent" containerID="cri-o://e1e26a5f92aa9364b4593c1d404f1fdf0c3e5751affc704becbe5569ed535c87" gracePeriod=30 Mar 13 20:50:54 crc kubenswrapper[5029]: I0313 20:50:54.657044 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="ceilometer-notification-agent" containerID="cri-o://eca50e74e990c5d503eb8590b5c3475be991f46735e404f2b50ca967d006e068" gracePeriod=30 Mar 13 20:50:54 crc kubenswrapper[5029]: I0313 20:50:54.657027 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="proxy-httpd" containerID="cri-o://adc0788ddd54c41651122b7dec9a8c373edb8d1cf3f335985c269f333ab24508" gracePeriod=30 Mar 13 20:50:54 crc kubenswrapper[5029]: I0313 20:50:54.657470 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="sg-core" containerID="cri-o://551c55f4ccd8c7225844cad0d2d207c1a848bbed12f7d3f8f46fe5e9ada54bdb" gracePeriod=30 Mar 13 20:50:54 crc kubenswrapper[5029]: I0313 20:50:54.688265 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 13 20:50:55 crc kubenswrapper[5029]: I0313 20:50:55.093473 5029 generic.go:334] "Generic (PLEG): container finished" podID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerID="551c55f4ccd8c7225844cad0d2d207c1a848bbed12f7d3f8f46fe5e9ada54bdb" exitCode=2 Mar 13 20:50:55 crc 
kubenswrapper[5029]: I0313 20:50:55.093542 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerDied","Data":"551c55f4ccd8c7225844cad0d2d207c1a848bbed12f7d3f8f46fe5e9ada54bdb"} Mar 13 20:50:56 crc kubenswrapper[5029]: I0313 20:50:56.105391 5029 generic.go:334] "Generic (PLEG): container finished" podID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerID="adc0788ddd54c41651122b7dec9a8c373edb8d1cf3f335985c269f333ab24508" exitCode=0 Mar 13 20:50:56 crc kubenswrapper[5029]: I0313 20:50:56.105734 5029 generic.go:334] "Generic (PLEG): container finished" podID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerID="e1e26a5f92aa9364b4593c1d404f1fdf0c3e5751affc704becbe5569ed535c87" exitCode=0 Mar 13 20:50:56 crc kubenswrapper[5029]: I0313 20:50:56.105442 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerDied","Data":"adc0788ddd54c41651122b7dec9a8c373edb8d1cf3f335985c269f333ab24508"} Mar 13 20:50:56 crc kubenswrapper[5029]: I0313 20:50:56.105786 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerDied","Data":"e1e26a5f92aa9364b4593c1d404f1fdf0c3e5751affc704becbe5569ed535c87"} Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.158017 5029 generic.go:334] "Generic (PLEG): container finished" podID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerID="eca50e74e990c5d503eb8590b5c3475be991f46735e404f2b50ca967d006e068" exitCode=0 Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.158311 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerDied","Data":"eca50e74e990c5d503eb8590b5c3475be991f46735e404f2b50ca967d006e068"} Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.376622 5029 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.460314 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-combined-ca-bundle\") pod \"78eeb560-069f-4773-b6be-77e8d34acd2f\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.460379 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w6kh\" (UniqueName: \"kubernetes.io/projected/78eeb560-069f-4773-b6be-77e8d34acd2f-kube-api-access-5w6kh\") pod \"78eeb560-069f-4773-b6be-77e8d34acd2f\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.460476 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-config-data\") pod \"78eeb560-069f-4773-b6be-77e8d34acd2f\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.460494 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-scripts\") pod \"78eeb560-069f-4773-b6be-77e8d34acd2f\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.460530 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-run-httpd\") pod \"78eeb560-069f-4773-b6be-77e8d34acd2f\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.460607 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-sg-core-conf-yaml\") pod \"78eeb560-069f-4773-b6be-77e8d34acd2f\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.460680 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-log-httpd\") pod \"78eeb560-069f-4773-b6be-77e8d34acd2f\" (UID: \"78eeb560-069f-4773-b6be-77e8d34acd2f\") " Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.461353 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78eeb560-069f-4773-b6be-77e8d34acd2f" (UID: "78eeb560-069f-4773-b6be-77e8d34acd2f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.461784 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78eeb560-069f-4773-b6be-77e8d34acd2f" (UID: "78eeb560-069f-4773-b6be-77e8d34acd2f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.466990 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78eeb560-069f-4773-b6be-77e8d34acd2f-kube-api-access-5w6kh" (OuterVolumeSpecName: "kube-api-access-5w6kh") pod "78eeb560-069f-4773-b6be-77e8d34acd2f" (UID: "78eeb560-069f-4773-b6be-77e8d34acd2f"). InnerVolumeSpecName "kube-api-access-5w6kh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.467124 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-scripts" (OuterVolumeSpecName: "scripts") pod "78eeb560-069f-4773-b6be-77e8d34acd2f" (UID: "78eeb560-069f-4773-b6be-77e8d34acd2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.527012 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78eeb560-069f-4773-b6be-77e8d34acd2f" (UID: "78eeb560-069f-4773-b6be-77e8d34acd2f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.560771 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78eeb560-069f-4773-b6be-77e8d34acd2f" (UID: "78eeb560-069f-4773-b6be-77e8d34acd2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.564275 5029 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.564307 5029 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.564316 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.564328 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w6kh\" (UniqueName: \"kubernetes.io/projected/78eeb560-069f-4773-b6be-77e8d34acd2f-kube-api-access-5w6kh\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.564341 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.564350 5029 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78eeb560-069f-4773-b6be-77e8d34acd2f-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.608461 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-config-data" (OuterVolumeSpecName: "config-data") pod "78eeb560-069f-4773-b6be-77e8d34acd2f" (UID: "78eeb560-069f-4773-b6be-77e8d34acd2f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:50:57 crc kubenswrapper[5029]: I0313 20:50:57.666926 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78eeb560-069f-4773-b6be-77e8d34acd2f-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.188403 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78eeb560-069f-4773-b6be-77e8d34acd2f","Type":"ContainerDied","Data":"2b4e522a52d4ba332624df021f9466d706d32c2eae0350cda102e4d0c366dd8a"} Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.188559 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.188764 5029 scope.go:117] "RemoveContainer" containerID="adc0788ddd54c41651122b7dec9a8c373edb8d1cf3f335985c269f333ab24508" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.215146 5029 scope.go:117] "RemoveContainer" containerID="551c55f4ccd8c7225844cad0d2d207c1a848bbed12f7d3f8f46fe5e9ada54bdb" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.227962 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.243764 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.256440 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:58 crc kubenswrapper[5029]: E0313 20:50:58.256975 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="ceilometer-central-agent" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.257001 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" 
containerName="ceilometer-central-agent" Mar 13 20:50:58 crc kubenswrapper[5029]: E0313 20:50:58.257022 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="sg-core" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.257035 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="sg-core" Mar 13 20:50:58 crc kubenswrapper[5029]: E0313 20:50:58.257055 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="ceilometer-notification-agent" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.257094 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="ceilometer-notification-agent" Mar 13 20:50:58 crc kubenswrapper[5029]: E0313 20:50:58.257109 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="proxy-httpd" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.257116 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="proxy-httpd" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.257320 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="proxy-httpd" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.257344 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="ceilometer-notification-agent" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.257356 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="sg-core" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.257371 5029 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" containerName="ceilometer-central-agent" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.257554 5029 scope.go:117] "RemoveContainer" containerID="eca50e74e990c5d503eb8590b5c3475be991f46735e404f2b50ca967d006e068" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.261305 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.268191 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.268603 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.280239 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.299177 5029 scope.go:117] "RemoveContainer" containerID="e1e26a5f92aa9364b4593c1d404f1fdf0c3e5751affc704becbe5569ed535c87" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.403032 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-config-data\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.403133 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-log-httpd\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.403407 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.403472 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jj9m\" (UniqueName: \"kubernetes.io/projected/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-kube-api-access-8jj9m\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.403501 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-run-httpd\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.404003 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.404162 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-scripts\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.506659 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.506732 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-scripts\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.506765 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-config-data\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.506794 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-log-httpd\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.506905 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.506927 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jj9m\" (UniqueName: \"kubernetes.io/projected/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-kube-api-access-8jj9m\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.506952 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-run-httpd\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.507660 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-log-httpd\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.507682 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-run-httpd\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.514918 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.516116 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.516558 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-config-data\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.521936 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-scripts\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.525635 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jj9m\" (UniqueName: \"kubernetes.io/projected/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-kube-api-access-8jj9m\") pod \"ceilometer-0\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " pod="openstack/ceilometer-0" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.612777 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78eeb560-069f-4773-b6be-77e8d34acd2f" path="/var/lib/kubelet/pods/78eeb560-069f-4773-b6be-77e8d34acd2f/volumes" Mar 13 20:50:58 crc kubenswrapper[5029]: I0313 20:50:58.618215 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:50:59 crc kubenswrapper[5029]: I0313 20:50:59.102104 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:50:59 crc kubenswrapper[5029]: I0313 20:50:59.203046 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerStarted","Data":"ffd0f66b0505413e3d73d78033948d67c79b87d7b39456b1229b19f9e0989b8e"} Mar 13 20:51:00 crc kubenswrapper[5029]: I0313 20:51:00.219181 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerStarted","Data":"7501ce3787ad259910a4dd5f014c8bfed9a1b2645235474f868461e4e63ef402"} Mar 13 20:51:01 crc kubenswrapper[5029]: I0313 20:51:01.252806 5029 generic.go:334] "Generic (PLEG): container finished" podID="00b4b9eb-002c-49a2-89ef-65fcf9fd4a32" 
containerID="ce4533e15a29f2473cb0ee6798c4596c018d71abcc8b7a0ed51b74ce156e4024" exitCode=0 Mar 13 20:51:01 crc kubenswrapper[5029]: I0313 20:51:01.253305 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mwzf9" event={"ID":"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32","Type":"ContainerDied","Data":"ce4533e15a29f2473cb0ee6798c4596c018d71abcc8b7a0ed51b74ce156e4024"} Mar 13 20:51:01 crc kubenswrapper[5029]: I0313 20:51:01.255278 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerStarted","Data":"45b72e2b0db3c5b81f5c7582f6e0ea7b35e46f79855b2ad9bfba82d06c890d63"} Mar 13 20:51:02 crc kubenswrapper[5029]: I0313 20:51:02.740730 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:51:02 crc kubenswrapper[5029]: I0313 20:51:02.898933 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-combined-ca-bundle\") pod \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " Mar 13 20:51:02 crc kubenswrapper[5029]: I0313 20:51:02.900415 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b85jt\" (UniqueName: \"kubernetes.io/projected/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-kube-api-access-b85jt\") pod \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " Mar 13 20:51:02 crc kubenswrapper[5029]: I0313 20:51:02.900567 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-config-data\") pod \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " Mar 13 20:51:02 crc 
kubenswrapper[5029]: I0313 20:51:02.900754 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-scripts\") pod \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\" (UID: \"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32\") " Mar 13 20:51:02 crc kubenswrapper[5029]: I0313 20:51:02.904634 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-scripts" (OuterVolumeSpecName: "scripts") pod "00b4b9eb-002c-49a2-89ef-65fcf9fd4a32" (UID: "00b4b9eb-002c-49a2-89ef-65fcf9fd4a32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:02 crc kubenswrapper[5029]: I0313 20:51:02.905326 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-kube-api-access-b85jt" (OuterVolumeSpecName: "kube-api-access-b85jt") pod "00b4b9eb-002c-49a2-89ef-65fcf9fd4a32" (UID: "00b4b9eb-002c-49a2-89ef-65fcf9fd4a32"). InnerVolumeSpecName "kube-api-access-b85jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:02 crc kubenswrapper[5029]: I0313 20:51:02.931128 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00b4b9eb-002c-49a2-89ef-65fcf9fd4a32" (UID: "00b4b9eb-002c-49a2-89ef-65fcf9fd4a32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:02 crc kubenswrapper[5029]: I0313 20:51:02.947413 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-config-data" (OuterVolumeSpecName: "config-data") pod "00b4b9eb-002c-49a2-89ef-65fcf9fd4a32" (UID: "00b4b9eb-002c-49a2-89ef-65fcf9fd4a32"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.003267 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.003316 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b85jt\" (UniqueName: \"kubernetes.io/projected/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-kube-api-access-b85jt\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.003331 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.003340 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.281187 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerStarted","Data":"cf84df7733c24482cd3931d0f5871c77b07206f191412540c9fedfeeb421f913"} Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.283723 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mwzf9" event={"ID":"00b4b9eb-002c-49a2-89ef-65fcf9fd4a32","Type":"ContainerDied","Data":"1e9cbd1f128a8f9b8aeab01e0252f9028ff31b4efac122480f69cf1342f524a6"} Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.283762 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e9cbd1f128a8f9b8aeab01e0252f9028ff31b4efac122480f69cf1342f524a6" Mar 13 
20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.283836 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mwzf9" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.405962 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 20:51:03 crc kubenswrapper[5029]: E0313 20:51:03.406691 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b4b9eb-002c-49a2-89ef-65fcf9fd4a32" containerName="nova-cell0-conductor-db-sync" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.406718 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="00b4b9eb-002c-49a2-89ef-65fcf9fd4a32" containerName="nova-cell0-conductor-db-sync" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.407022 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b4b9eb-002c-49a2-89ef-65fcf9fd4a32" containerName="nova-cell0-conductor-db-sync" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.408082 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.411231 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pqbfx" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.413475 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.418555 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.514762 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c032309-b0f6-4917-8e27-6e39bc22f646-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0c032309-b0f6-4917-8e27-6e39bc22f646\") " pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.515010 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c032309-b0f6-4917-8e27-6e39bc22f646-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0c032309-b0f6-4917-8e27-6e39bc22f646\") " pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.515152 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvz2\" (UniqueName: \"kubernetes.io/projected/0c032309-b0f6-4917-8e27-6e39bc22f646-kube-api-access-fxvz2\") pod \"nova-cell0-conductor-0\" (UID: \"0c032309-b0f6-4917-8e27-6e39bc22f646\") " pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.618102 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c032309-b0f6-4917-8e27-6e39bc22f646-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0c032309-b0f6-4917-8e27-6e39bc22f646\") " pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.618732 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxvz2\" (UniqueName: \"kubernetes.io/projected/0c032309-b0f6-4917-8e27-6e39bc22f646-kube-api-access-fxvz2\") pod \"nova-cell0-conductor-0\" (UID: \"0c032309-b0f6-4917-8e27-6e39bc22f646\") " pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.619031 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c032309-b0f6-4917-8e27-6e39bc22f646-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0c032309-b0f6-4917-8e27-6e39bc22f646\") " pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.626153 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c032309-b0f6-4917-8e27-6e39bc22f646-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0c032309-b0f6-4917-8e27-6e39bc22f646\") " pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.627740 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c032309-b0f6-4917-8e27-6e39bc22f646-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0c032309-b0f6-4917-8e27-6e39bc22f646\") " pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.641453 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxvz2\" (UniqueName: \"kubernetes.io/projected/0c032309-b0f6-4917-8e27-6e39bc22f646-kube-api-access-fxvz2\") pod \"nova-cell0-conductor-0\" (UID: 
\"0c032309-b0f6-4917-8e27-6e39bc22f646\") " pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:03 crc kubenswrapper[5029]: I0313 20:51:03.743424 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:04 crc kubenswrapper[5029]: W0313 20:51:04.244503 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c032309_b0f6_4917_8e27_6e39bc22f646.slice/crio-ea65fbfb8cb8018274955b732a2b6d4c8ff6d12e0501719528271c08622de1f5 WatchSource:0}: Error finding container ea65fbfb8cb8018274955b732a2b6d4c8ff6d12e0501719528271c08622de1f5: Status 404 returned error can't find the container with id ea65fbfb8cb8018274955b732a2b6d4c8ff6d12e0501719528271c08622de1f5 Mar 13 20:51:04 crc kubenswrapper[5029]: I0313 20:51:04.248646 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 20:51:04 crc kubenswrapper[5029]: I0313 20:51:04.294447 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0c032309-b0f6-4917-8e27-6e39bc22f646","Type":"ContainerStarted","Data":"ea65fbfb8cb8018274955b732a2b6d4c8ff6d12e0501719528271c08622de1f5"} Mar 13 20:51:05 crc kubenswrapper[5029]: I0313 20:51:05.304319 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0c032309-b0f6-4917-8e27-6e39bc22f646","Type":"ContainerStarted","Data":"4d3e40dcae15b9f9a32f99e508ed589ccfbd79e1163aecac7d4c240e7e908a42"} Mar 13 20:51:05 crc kubenswrapper[5029]: I0313 20:51:05.304885 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:05 crc kubenswrapper[5029]: I0313 20:51:05.306661 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerStarted","Data":"924fff1b8ed5903434680277adb181b7483b716961296b65e37def4eb3e1ab15"} Mar 13 20:51:05 crc kubenswrapper[5029]: I0313 20:51:05.306869 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 20:51:05 crc kubenswrapper[5029]: I0313 20:51:05.330819 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.330800957 podStartE2EDuration="2.330800957s" podCreationTimestamp="2026-03-13 20:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:51:05.325178994 +0000 UTC m=+1425.341261397" watchObservedRunningTime="2026-03-13 20:51:05.330800957 +0000 UTC m=+1425.346883360" Mar 13 20:51:05 crc kubenswrapper[5029]: I0313 20:51:05.355976 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9424297529999999 podStartE2EDuration="7.355940383s" podCreationTimestamp="2026-03-13 20:50:58 +0000 UTC" firstStartedPulling="2026-03-13 20:50:59.115334796 +0000 UTC m=+1419.131417199" lastFinishedPulling="2026-03-13 20:51:04.528845426 +0000 UTC m=+1424.544927829" observedRunningTime="2026-03-13 20:51:05.345912949 +0000 UTC m=+1425.361995372" watchObservedRunningTime="2026-03-13 20:51:05.355940383 +0000 UTC m=+1425.372022786" Mar 13 20:51:07 crc kubenswrapper[5029]: I0313 20:51:07.783046 5029 scope.go:117] "RemoveContainer" containerID="20b28818464ddc70db1b2cd377a70d954e626278e81423aee0e2982033924bb1" Mar 13 20:51:13 crc kubenswrapper[5029]: I0313 20:51:13.779169 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.389460 5029 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-nds9k"] Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.390782 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.393606 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.405560 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.410263 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nds9k"] Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.482496 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.482612 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk76w\" (UniqueName: \"kubernetes.io/projected/c8b9ff74-525b-4376-91b3-8ca127d7174a-kube-api-access-fk76w\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.483123 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-scripts\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc 
kubenswrapper[5029]: I0313 20:51:14.483182 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-config-data\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.589347 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-scripts\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.589414 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-config-data\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.589517 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.589571 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk76w\" (UniqueName: \"kubernetes.io/projected/c8b9ff74-525b-4376-91b3-8ca127d7174a-kube-api-access-fk76w\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 
20:51:14.606382 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-config-data\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.607504 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-scripts\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.627787 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.667560 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk76w\" (UniqueName: \"kubernetes.io/projected/c8b9ff74-525b-4376-91b3-8ca127d7174a-kube-api-access-fk76w\") pod \"nova-cell0-cell-mapping-nds9k\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.712347 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.777268 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.778884 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.786063 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.809074 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.841503 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.869094 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.869198 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.900581 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.900652 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-config-data\") pod \"nova-scheduler-0\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.900836 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdlpb\" (UniqueName: \"kubernetes.io/projected/512ae667-b970-4cf6-839b-d0d730bbf3a2-kube-api-access-wdlpb\") pod \"nova-scheduler-0\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " 
pod="openstack/nova-scheduler-0" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.930499 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.933623 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:51:14 crc kubenswrapper[5029]: I0313 20:51:14.943256 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.020588 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021030 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-config-data\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021076 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-config-data\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021189 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdlpb\" (UniqueName: \"kubernetes.io/projected/512ae667-b970-4cf6-839b-d0d730bbf3a2-kube-api-access-wdlpb\") pod \"nova-scheduler-0\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " 
pod="openstack/nova-scheduler-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021213 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vk4p\" (UniqueName: \"kubernetes.io/projected/353919e9-7af7-4643-89c1-dd7b66b425e4-kube-api-access-9vk4p\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021252 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6343032-288c-493a-9a01-2595afb05818-logs\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021280 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n5lt\" (UniqueName: \"kubernetes.io/projected/c6343032-288c-493a-9a01-2595afb05818-kube-api-access-9n5lt\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021322 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353919e9-7af7-4643-89c1-dd7b66b425e4-logs\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021353 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021377 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-config-data\") pod \"nova-scheduler-0\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.021397 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.073888 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.075491 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.087688 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-config-data\") pod \"nova-scheduler-0\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.095445 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdlpb\" (UniqueName: \"kubernetes.io/projected/512ae667-b970-4cf6-839b-d0d730bbf3a2-kube-api-access-wdlpb\") pod \"nova-scheduler-0\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.123974 5029 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-config-data\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.124245 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vk4p\" (UniqueName: \"kubernetes.io/projected/353919e9-7af7-4643-89c1-dd7b66b425e4-kube-api-access-9vk4p\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.124312 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6343032-288c-493a-9a01-2595afb05818-logs\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.124356 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n5lt\" (UniqueName: \"kubernetes.io/projected/c6343032-288c-493a-9a01-2595afb05818-kube-api-access-9n5lt\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.124414 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353919e9-7af7-4643-89c1-dd7b66b425e4-logs\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.124482 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" 
Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.124542 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.124644 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-config-data\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.136740 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6343032-288c-493a-9a01-2595afb05818-logs\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.137953 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353919e9-7af7-4643-89c1-dd7b66b425e4-logs\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.141366 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-config-data\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.143950 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.145225 5029 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.145939 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.166761 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-config-data\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.194000 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n5lt\" (UniqueName: \"kubernetes.io/projected/c6343032-288c-493a-9a01-2595afb05818-kube-api-access-9n5lt\") pod \"nova-metadata-0\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.240013 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vk4p\" (UniqueName: \"kubernetes.io/projected/353919e9-7af7-4643-89c1-dd7b66b425e4-kube-api-access-9vk4p\") pod \"nova-api-0\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.241604 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.242443 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.314471 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.364613 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.377599 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.390525 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.456952 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.474107 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.474303 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqdh\" (UniqueName: \"kubernetes.io/projected/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-kube-api-access-4tqdh\") pod \"nova-cell1-novncproxy-0\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.474361 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.503312 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-72rqn"] Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.505175 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.534553 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-72rqn"] Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.579067 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.579149 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.579201 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rznwh\" (UniqueName: \"kubernetes.io/projected/72c716ac-a862-41c9-be07-07d0df558b07-kube-api-access-rznwh\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.579274 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-config\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.579365 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tqdh\" (UniqueName: \"kubernetes.io/projected/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-kube-api-access-4tqdh\") pod \"nova-cell1-novncproxy-0\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.579424 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.579462 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.579522 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-svc\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.579564 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.680016 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.682695 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.684922 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.690690 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tqdh\" (UniqueName: \"kubernetes.io/projected/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-kube-api-access-4tqdh\") pod \"nova-cell1-novncproxy-0\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.691278 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-svc\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.691505 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.691592 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.691709 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rznwh\" (UniqueName: \"kubernetes.io/projected/72c716ac-a862-41c9-be07-07d0df558b07-kube-api-access-rznwh\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.691941 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-config\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.697576 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.704821 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.705547 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-svc\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.709639 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-config\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.711121 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.733065 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rznwh\" (UniqueName: \"kubernetes.io/projected/72c716ac-a862-41c9-be07-07d0df558b07-kube-api-access-rznwh\") pod 
\"dnsmasq-dns-6b6c754dc9-72rqn\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.854423 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.887035 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:15 crc kubenswrapper[5029]: I0313 20:51:15.930363 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nds9k"] Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.268204 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:51:16 crc kubenswrapper[5029]: W0313 20:51:16.278469 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6343032_288c_493a_9a01_2595afb05818.slice/crio-fffc6e2a6ef1634fc40609d658e28b15f96360bd411588885b9cbf0fc878874b WatchSource:0}: Error finding container fffc6e2a6ef1634fc40609d658e28b15f96360bd411588885b9cbf0fc878874b: Status 404 returned error can't find the container with id fffc6e2a6ef1634fc40609d658e28b15f96360bd411588885b9cbf0fc878874b Mar 13 20:51:16 crc kubenswrapper[5029]: W0313 20:51:16.281190 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod512ae667_b970_4cf6_839b_d0d730bbf3a2.slice/crio-f77732d41762acd6131a90a3d1392f9d3ff1d3b1b38a17921beccf8845fe5c2d WatchSource:0}: Error finding container f77732d41762acd6131a90a3d1392f9d3ff1d3b1b38a17921beccf8845fe5c2d: Status 404 returned error can't find the container with id f77732d41762acd6131a90a3d1392f9d3ff1d3b1b38a17921beccf8845fe5c2d Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.292639 5029 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.408486 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nhlxw"] Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.410218 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.413706 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.415012 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.423534 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nhlxw"] Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.526245 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.526351 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-scripts\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.526394 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-config-data\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.526422 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnpj2\" (UniqueName: \"kubernetes.io/projected/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-kube-api-access-fnpj2\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.563290 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nds9k" event={"ID":"c8b9ff74-525b-4376-91b3-8ca127d7174a","Type":"ContainerStarted","Data":"b06e3cf6da382dfe5b77c9b04c7e63c9d3c18f05bec4f9e6527a596c99b57745"} Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.563355 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nds9k" event={"ID":"c8b9ff74-525b-4376-91b3-8ca127d7174a","Type":"ContainerStarted","Data":"52ff63686dd4cf929b2c4ccba404e7e0e1ec5c9c3d93e686e8d4cfc1b72b4dc1"} Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.572059 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"512ae667-b970-4cf6-839b-d0d730bbf3a2","Type":"ContainerStarted","Data":"f77732d41762acd6131a90a3d1392f9d3ff1d3b1b38a17921beccf8845fe5c2d"} Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.590803 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-nds9k" podStartSLOduration=2.590774623 podStartE2EDuration="2.590774623s" podCreationTimestamp="2026-03-13 20:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-13 20:51:16.588153132 +0000 UTC m=+1436.604235535" watchObservedRunningTime="2026-03-13 20:51:16.590774623 +0000 UTC m=+1436.606857026" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.592081 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6343032-288c-493a-9a01-2595afb05818","Type":"ContainerStarted","Data":"fffc6e2a6ef1634fc40609d658e28b15f96360bd411588885b9cbf0fc878874b"} Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.638843 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.648452 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.650166 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-scripts\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.650351 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-config-data\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " 
pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.650451 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnpj2\" (UniqueName: \"kubernetes.io/projected/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-kube-api-access-fnpj2\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.660036 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-scripts\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.661084 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-config-data\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.671264 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:16 crc kubenswrapper[5029]: W0313 20:51:16.688291 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353919e9_7af7_4643_89c1_dd7b66b425e4.slice/crio-38459d4f854c53b0b32c8b3963f34314d84a5eb2f97f3fd0aedc41f5aba46c3b WatchSource:0}: Error finding container 38459d4f854c53b0b32c8b3963f34314d84a5eb2f97f3fd0aedc41f5aba46c3b: Status 404 returned error can't find the container with id 38459d4f854c53b0b32c8b3963f34314d84a5eb2f97f3fd0aedc41f5aba46c3b Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.709590 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnpj2\" (UniqueName: \"kubernetes.io/projected/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-kube-api-access-fnpj2\") pod \"nova-cell1-conductor-db-sync-nhlxw\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.740160 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.790907 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-72rqn"] Mar 13 20:51:16 crc kubenswrapper[5029]: I0313 20:51:16.866502 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 20:51:17 crc kubenswrapper[5029]: I0313 20:51:17.509408 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nhlxw"] Mar 13 20:51:17 crc kubenswrapper[5029]: W0313 20:51:17.535198 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9a1a6da_0bb6_4002_96f3_2b4275db33f0.slice/crio-f62522dc6eda9b8a2c92ef8b5d814b73bd2b9f5b190c5fc0249d08537536eed8 WatchSource:0}: Error finding container f62522dc6eda9b8a2c92ef8b5d814b73bd2b9f5b190c5fc0249d08537536eed8: Status 404 returned error can't find the container with id f62522dc6eda9b8a2c92ef8b5d814b73bd2b9f5b190c5fc0249d08537536eed8 Mar 13 20:51:17 crc kubenswrapper[5029]: I0313 20:51:17.630159 5029 generic.go:334] "Generic (PLEG): container finished" podID="72c716ac-a862-41c9-be07-07d0df558b07" containerID="58e09f9a565e494930eabdd497fa2dc44718e2e3ff1080dd87942f6a344a3c8b" exitCode=0 Mar 13 20:51:17 crc kubenswrapper[5029]: I0313 20:51:17.630272 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" 
event={"ID":"72c716ac-a862-41c9-be07-07d0df558b07","Type":"ContainerDied","Data":"58e09f9a565e494930eabdd497fa2dc44718e2e3ff1080dd87942f6a344a3c8b"} Mar 13 20:51:17 crc kubenswrapper[5029]: I0313 20:51:17.630312 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" event={"ID":"72c716ac-a862-41c9-be07-07d0df558b07","Type":"ContainerStarted","Data":"dee6f94d2b13dc7af83e907487e74967dcfd2f0588a406e15939612a22de240e"} Mar 13 20:51:17 crc kubenswrapper[5029]: I0313 20:51:17.638623 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bac0608b-f7a8-45e2-9dae-a5cd1623f6db","Type":"ContainerStarted","Data":"d5585615df1b4aa26cf1795d9080b7374ac0eb3b393fca340e4f15d808f34bc0"} Mar 13 20:51:17 crc kubenswrapper[5029]: I0313 20:51:17.654775 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"353919e9-7af7-4643-89c1-dd7b66b425e4","Type":"ContainerStarted","Data":"38459d4f854c53b0b32c8b3963f34314d84a5eb2f97f3fd0aedc41f5aba46c3b"} Mar 13 20:51:17 crc kubenswrapper[5029]: I0313 20:51:17.684507 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nhlxw" event={"ID":"b9a1a6da-0bb6-4002-96f3-2b4275db33f0","Type":"ContainerStarted","Data":"f62522dc6eda9b8a2c92ef8b5d814b73bd2b9f5b190c5fc0249d08537536eed8"} Mar 13 20:51:18 crc kubenswrapper[5029]: I0313 20:51:18.786771 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nhlxw" event={"ID":"b9a1a6da-0bb6-4002-96f3-2b4275db33f0","Type":"ContainerStarted","Data":"40fecae17729b30acfe9ac26f9d5aa494dfd0ac15b787ab6f3d6f3a5a46a741f"} Mar 13 20:51:18 crc kubenswrapper[5029]: I0313 20:51:18.811806 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" 
event={"ID":"72c716ac-a862-41c9-be07-07d0df558b07","Type":"ContainerStarted","Data":"9bfa6410ff244af78bfaa062053c693be7475335d374271b72f5ed6ad5b5175c"} Mar 13 20:51:18 crc kubenswrapper[5029]: I0313 20:51:18.813042 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:18 crc kubenswrapper[5029]: I0313 20:51:18.902047 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nhlxw" podStartSLOduration=2.902014517 podStartE2EDuration="2.902014517s" podCreationTimestamp="2026-03-13 20:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:51:18.852253801 +0000 UTC m=+1438.868336194" watchObservedRunningTime="2026-03-13 20:51:18.902014517 +0000 UTC m=+1438.918096920" Mar 13 20:51:18 crc kubenswrapper[5029]: I0313 20:51:18.911662 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" podStartSLOduration=3.91163244 podStartE2EDuration="3.91163244s" podCreationTimestamp="2026-03-13 20:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:51:18.898099841 +0000 UTC m=+1438.914182254" watchObservedRunningTime="2026-03-13 20:51:18.91163244 +0000 UTC m=+1438.927714853" Mar 13 20:51:20 crc kubenswrapper[5029]: I0313 20:51:20.248106 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 20:51:20 crc kubenswrapper[5029]: I0313 20:51:20.284814 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.871782 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"512ae667-b970-4cf6-839b-d0d730bbf3a2","Type":"ContainerStarted","Data":"517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd"} Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.878100 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bac0608b-f7a8-45e2-9dae-a5cd1623f6db","Type":"ContainerStarted","Data":"b99729606e95ff8ba7c3943c0cd738121474ba8d694f5297382de70bb267ac8c"} Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.878723 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="bac0608b-f7a8-45e2-9dae-a5cd1623f6db" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b99729606e95ff8ba7c3943c0cd738121474ba8d694f5297382de70bb267ac8c" gracePeriod=30 Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.884048 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6343032-288c-493a-9a01-2595afb05818","Type":"ContainerStarted","Data":"2113b4c7052640b04abaeac4661ed3b21b11aa11f8963cd632dfbbc21d79a667"} Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.884248 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6343032-288c-493a-9a01-2595afb05818","Type":"ContainerStarted","Data":"23f50f0ba8a8e0fafb1d784f3c3ce17bbf23f89ed3df191fc166a0b3978d35ea"} Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.884205 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c6343032-288c-493a-9a01-2595afb05818" containerName="nova-metadata-metadata" containerID="cri-o://2113b4c7052640b04abaeac4661ed3b21b11aa11f8963cd632dfbbc21d79a667" gracePeriod=30 Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.884127 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="c6343032-288c-493a-9a01-2595afb05818" containerName="nova-metadata-log" containerID="cri-o://23f50f0ba8a8e0fafb1d784f3c3ce17bbf23f89ed3df191fc166a0b3978d35ea" gracePeriod=30 Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.896504 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.053564479 podStartE2EDuration="8.896471976s" podCreationTimestamp="2026-03-13 20:51:14 +0000 UTC" firstStartedPulling="2026-03-13 20:51:16.304792818 +0000 UTC m=+1436.320875221" lastFinishedPulling="2026-03-13 20:51:22.147700315 +0000 UTC m=+1442.163782718" observedRunningTime="2026-03-13 20:51:22.891811018 +0000 UTC m=+1442.907893421" watchObservedRunningTime="2026-03-13 20:51:22.896471976 +0000 UTC m=+1442.912554369" Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.899888 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"353919e9-7af7-4643-89c1-dd7b66b425e4","Type":"ContainerStarted","Data":"cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171"} Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.899981 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"353919e9-7af7-4643-89c1-dd7b66b425e4","Type":"ContainerStarted","Data":"6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765"} Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.922481 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.6661282870000003 podStartE2EDuration="8.922456484s" podCreationTimestamp="2026-03-13 20:51:14 +0000 UTC" firstStartedPulling="2026-03-13 20:51:16.890257187 +0000 UTC m=+1436.906339590" lastFinishedPulling="2026-03-13 20:51:22.146585384 +0000 UTC m=+1442.162667787" observedRunningTime="2026-03-13 20:51:22.91057005 +0000 UTC m=+1442.926652453" watchObservedRunningTime="2026-03-13 20:51:22.922456484 +0000 UTC 
m=+1442.938538887" Mar 13 20:51:22 crc kubenswrapper[5029]: I0313 20:51:22.944266 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.105921276 podStartE2EDuration="8.944243998s" podCreationTimestamp="2026-03-13 20:51:14 +0000 UTC" firstStartedPulling="2026-03-13 20:51:16.304074128 +0000 UTC m=+1436.320156531" lastFinishedPulling="2026-03-13 20:51:22.14239685 +0000 UTC m=+1442.158479253" observedRunningTime="2026-03-13 20:51:22.939685694 +0000 UTC m=+1442.955768097" watchObservedRunningTime="2026-03-13 20:51:22.944243998 +0000 UTC m=+1442.960326401" Mar 13 20:51:23 crc kubenswrapper[5029]: I0313 20:51:23.912387 5029 generic.go:334] "Generic (PLEG): container finished" podID="c6343032-288c-493a-9a01-2595afb05818" containerID="23f50f0ba8a8e0fafb1d784f3c3ce17bbf23f89ed3df191fc166a0b3978d35ea" exitCode=143 Mar 13 20:51:23 crc kubenswrapper[5029]: I0313 20:51:23.912664 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6343032-288c-493a-9a01-2595afb05818","Type":"ContainerDied","Data":"23f50f0ba8a8e0fafb1d784f3c3ce17bbf23f89ed3df191fc166a0b3978d35ea"} Mar 13 20:51:25 crc kubenswrapper[5029]: I0313 20:51:25.243657 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 13 20:51:25 crc kubenswrapper[5029]: I0313 20:51:25.244754 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 13 20:51:25 crc kubenswrapper[5029]: I0313 20:51:25.280128 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 13 20:51:25 crc kubenswrapper[5029]: I0313 20:51:25.315585 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 20:51:25 crc kubenswrapper[5029]: I0313 20:51:25.315661 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Mar 13 20:51:25 crc kubenswrapper[5029]: I0313 20:51:25.317008 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.880402438 podStartE2EDuration="11.316986618s" podCreationTimestamp="2026-03-13 20:51:14 +0000 UTC" firstStartedPulling="2026-03-13 20:51:16.709027747 +0000 UTC m=+1436.725110150" lastFinishedPulling="2026-03-13 20:51:22.145611927 +0000 UTC m=+1442.161694330" observedRunningTime="2026-03-13 20:51:23.000008138 +0000 UTC m=+1443.016090541" watchObservedRunningTime="2026-03-13 20:51:25.316986618 +0000 UTC m=+1445.333069021" Mar 13 20:51:25 crc kubenswrapper[5029]: I0313 20:51:25.856001 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:25 crc kubenswrapper[5029]: I0313 20:51:25.888018 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:25.997430 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-hrh96"] Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:25.997813 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56696ff475-hrh96" podUID="21bfc307-8188-473c-8dc6-d24acb8f0694" containerName="dnsmasq-dns" containerID="cri-o://d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f" gracePeriod=10 Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.006238 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.399188 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.399648 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.615620 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.727417 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-config\") pod \"21bfc307-8188-473c-8dc6-d24acb8f0694\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.727735 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-nb\") pod \"21bfc307-8188-473c-8dc6-d24acb8f0694\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.727901 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-swift-storage-0\") pod \"21bfc307-8188-473c-8dc6-d24acb8f0694\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.728174 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-svc\") pod \"21bfc307-8188-473c-8dc6-d24acb8f0694\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " 
Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.728231 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t4pr\" (UniqueName: \"kubernetes.io/projected/21bfc307-8188-473c-8dc6-d24acb8f0694-kube-api-access-8t4pr\") pod \"21bfc307-8188-473c-8dc6-d24acb8f0694\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.728267 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-sb\") pod \"21bfc307-8188-473c-8dc6-d24acb8f0694\" (UID: \"21bfc307-8188-473c-8dc6-d24acb8f0694\") " Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.736163 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21bfc307-8188-473c-8dc6-d24acb8f0694-kube-api-access-8t4pr" (OuterVolumeSpecName: "kube-api-access-8t4pr") pod "21bfc307-8188-473c-8dc6-d24acb8f0694" (UID: "21bfc307-8188-473c-8dc6-d24acb8f0694"). InnerVolumeSpecName "kube-api-access-8t4pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.791103 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21bfc307-8188-473c-8dc6-d24acb8f0694" (UID: "21bfc307-8188-473c-8dc6-d24acb8f0694"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.799393 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21bfc307-8188-473c-8dc6-d24acb8f0694" (UID: "21bfc307-8188-473c-8dc6-d24acb8f0694"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.799695 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-config" (OuterVolumeSpecName: "config") pod "21bfc307-8188-473c-8dc6-d24acb8f0694" (UID: "21bfc307-8188-473c-8dc6-d24acb8f0694"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.800534 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21bfc307-8188-473c-8dc6-d24acb8f0694" (UID: "21bfc307-8188-473c-8dc6-d24acb8f0694"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.811482 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "21bfc307-8188-473c-8dc6-d24acb8f0694" (UID: "21bfc307-8188-473c-8dc6-d24acb8f0694"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.832387 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.832430 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t4pr\" (UniqueName: \"kubernetes.io/projected/21bfc307-8188-473c-8dc6-d24acb8f0694-kube-api-access-8t4pr\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.832447 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.832462 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.832471 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.832479 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21bfc307-8188-473c-8dc6-d24acb8f0694-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.946216 5029 generic.go:334] "Generic (PLEG): container finished" podID="21bfc307-8188-473c-8dc6-d24acb8f0694" containerID="d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f" exitCode=0 Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.947532 5029 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56696ff475-hrh96" Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.959226 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-hrh96" event={"ID":"21bfc307-8188-473c-8dc6-d24acb8f0694","Type":"ContainerDied","Data":"d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f"} Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.959317 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-hrh96" event={"ID":"21bfc307-8188-473c-8dc6-d24acb8f0694","Type":"ContainerDied","Data":"1fb3eaf5ddab8bf90f62cd933a5fbce8c855510466287159071178777bc773fe"} Mar 13 20:51:26 crc kubenswrapper[5029]: I0313 20:51:26.959352 5029 scope.go:117] "RemoveContainer" containerID="d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f" Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.000313 5029 scope.go:117] "RemoveContainer" containerID="9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c" Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.037230 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-hrh96"] Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.040534 5029 scope.go:117] "RemoveContainer" containerID="d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f" Mar 13 20:51:27 crc kubenswrapper[5029]: E0313 20:51:27.041157 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f\": container with ID starting with d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f not found: ID does not exist" containerID="d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f" Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.041249 5029 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f"} err="failed to get container status \"d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f\": rpc error: code = NotFound desc = could not find container \"d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f\": container with ID starting with d41b586b6715f64d57fc2d5fcf03ddc2dbfd1c6e4b834935619f844056c2aa5f not found: ID does not exist" Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.041284 5029 scope.go:117] "RemoveContainer" containerID="9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c" Mar 13 20:51:27 crc kubenswrapper[5029]: E0313 20:51:27.041810 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c\": container with ID starting with 9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c not found: ID does not exist" containerID="9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c" Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.041967 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c"} err="failed to get container status \"9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c\": rpc error: code = NotFound desc = could not find container \"9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c\": container with ID starting with 9e4ec50bbc43d14ad56c742430a98030b9e54081f1dab108edf3f49c3ccd0c9c not found: ID does not exist" Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.047603 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-hrh96"] Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.960109 5029 generic.go:334] "Generic (PLEG): 
container finished" podID="b9a1a6da-0bb6-4002-96f3-2b4275db33f0" containerID="40fecae17729b30acfe9ac26f9d5aa494dfd0ac15b787ab6f3d6f3a5a46a741f" exitCode=0 Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.960153 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nhlxw" event={"ID":"b9a1a6da-0bb6-4002-96f3-2b4275db33f0","Type":"ContainerDied","Data":"40fecae17729b30acfe9ac26f9d5aa494dfd0ac15b787ab6f3d6f3a5a46a741f"} Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.964828 5029 generic.go:334] "Generic (PLEG): container finished" podID="c8b9ff74-525b-4376-91b3-8ca127d7174a" containerID="b06e3cf6da382dfe5b77c9b04c7e63c9d3c18f05bec4f9e6527a596c99b57745" exitCode=0 Mar 13 20:51:27 crc kubenswrapper[5029]: I0313 20:51:27.964920 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nds9k" event={"ID":"c8b9ff74-525b-4376-91b3-8ca127d7174a","Type":"ContainerDied","Data":"b06e3cf6da382dfe5b77c9b04c7e63c9d3c18f05bec4f9e6527a596c99b57745"} Mar 13 20:51:28 crc kubenswrapper[5029]: I0313 20:51:28.612376 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21bfc307-8188-473c-8dc6-d24acb8f0694" path="/var/lib/kubelet/pods/21bfc307-8188-473c-8dc6-d24acb8f0694/volumes" Mar 13 20:51:28 crc kubenswrapper[5029]: I0313 20:51:28.636327 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.569310 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.574202 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.673763 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk76w\" (UniqueName: \"kubernetes.io/projected/c8b9ff74-525b-4376-91b3-8ca127d7174a-kube-api-access-fk76w\") pod \"c8b9ff74-525b-4376-91b3-8ca127d7174a\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.673840 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-combined-ca-bundle\") pod \"c8b9ff74-525b-4376-91b3-8ca127d7174a\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.673977 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-scripts\") pod \"c8b9ff74-525b-4376-91b3-8ca127d7174a\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.674106 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-config-data\") pod \"c8b9ff74-525b-4376-91b3-8ca127d7174a\" (UID: \"c8b9ff74-525b-4376-91b3-8ca127d7174a\") " Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.684034 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b9ff74-525b-4376-91b3-8ca127d7174a-kube-api-access-fk76w" (OuterVolumeSpecName: "kube-api-access-fk76w") pod "c8b9ff74-525b-4376-91b3-8ca127d7174a" (UID: "c8b9ff74-525b-4376-91b3-8ca127d7174a"). InnerVolumeSpecName "kube-api-access-fk76w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.684035 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-scripts" (OuterVolumeSpecName: "scripts") pod "c8b9ff74-525b-4376-91b3-8ca127d7174a" (UID: "c8b9ff74-525b-4376-91b3-8ca127d7174a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.711070 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8b9ff74-525b-4376-91b3-8ca127d7174a" (UID: "c8b9ff74-525b-4376-91b3-8ca127d7174a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.714564 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-config-data" (OuterVolumeSpecName: "config-data") pod "c8b9ff74-525b-4376-91b3-8ca127d7174a" (UID: "c8b9ff74-525b-4376-91b3-8ca127d7174a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.776107 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-combined-ca-bundle\") pod \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.776790 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-scripts\") pod \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.776941 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-config-data\") pod \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.777103 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnpj2\" (UniqueName: \"kubernetes.io/projected/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-kube-api-access-fnpj2\") pod \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\" (UID: \"b9a1a6da-0bb6-4002-96f3-2b4275db33f0\") " Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.777921 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.778018 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk76w\" (UniqueName: \"kubernetes.io/projected/c8b9ff74-525b-4376-91b3-8ca127d7174a-kube-api-access-fk76w\") on node \"crc\" DevicePath 
\"\"" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.778096 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.778173 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8b9ff74-525b-4376-91b3-8ca127d7174a-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.780743 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-kube-api-access-fnpj2" (OuterVolumeSpecName: "kube-api-access-fnpj2") pod "b9a1a6da-0bb6-4002-96f3-2b4275db33f0" (UID: "b9a1a6da-0bb6-4002-96f3-2b4275db33f0"). InnerVolumeSpecName "kube-api-access-fnpj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.780911 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-scripts" (OuterVolumeSpecName: "scripts") pod "b9a1a6da-0bb6-4002-96f3-2b4275db33f0" (UID: "b9a1a6da-0bb6-4002-96f3-2b4275db33f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.806744 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-config-data" (OuterVolumeSpecName: "config-data") pod "b9a1a6da-0bb6-4002-96f3-2b4275db33f0" (UID: "b9a1a6da-0bb6-4002-96f3-2b4275db33f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.807363 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9a1a6da-0bb6-4002-96f3-2b4275db33f0" (UID: "b9a1a6da-0bb6-4002-96f3-2b4275db33f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.880094 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.880139 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.880155 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnpj2\" (UniqueName: \"kubernetes.io/projected/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-kube-api-access-fnpj2\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.880168 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9a1a6da-0bb6-4002-96f3-2b4275db33f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.990441 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nhlxw" event={"ID":"b9a1a6da-0bb6-4002-96f3-2b4275db33f0","Type":"ContainerDied","Data":"f62522dc6eda9b8a2c92ef8b5d814b73bd2b9f5b190c5fc0249d08537536eed8"} Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.990494 5029 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="f62522dc6eda9b8a2c92ef8b5d814b73bd2b9f5b190c5fc0249d08537536eed8" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.990497 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nhlxw" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.997277 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nds9k" event={"ID":"c8b9ff74-525b-4376-91b3-8ca127d7174a","Type":"ContainerDied","Data":"52ff63686dd4cf929b2c4ccba404e7e0e1ec5c9c3d93e686e8d4cfc1b72b4dc1"} Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.997334 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52ff63686dd4cf929b2c4ccba404e7e0e1ec5c9c3d93e686e8d4cfc1b72b4dc1" Mar 13 20:51:29 crc kubenswrapper[5029]: I0313 20:51:29.997289 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nds9k" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.124776 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 13 20:51:30 crc kubenswrapper[5029]: E0313 20:51:30.130230 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21bfc307-8188-473c-8dc6-d24acb8f0694" containerName="init" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.130263 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="21bfc307-8188-473c-8dc6-d24acb8f0694" containerName="init" Mar 13 20:51:30 crc kubenswrapper[5029]: E0313 20:51:30.130282 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b9ff74-525b-4376-91b3-8ca127d7174a" containerName="nova-manage" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.130293 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b9ff74-525b-4376-91b3-8ca127d7174a" containerName="nova-manage" Mar 13 20:51:30 crc kubenswrapper[5029]: E0313 
20:51:30.130306 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21bfc307-8188-473c-8dc6-d24acb8f0694" containerName="dnsmasq-dns" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.130317 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="21bfc307-8188-473c-8dc6-d24acb8f0694" containerName="dnsmasq-dns" Mar 13 20:51:30 crc kubenswrapper[5029]: E0313 20:51:30.130352 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9a1a6da-0bb6-4002-96f3-2b4275db33f0" containerName="nova-cell1-conductor-db-sync" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.130360 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9a1a6da-0bb6-4002-96f3-2b4275db33f0" containerName="nova-cell1-conductor-db-sync" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.130587 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b9ff74-525b-4376-91b3-8ca127d7174a" containerName="nova-manage" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.130608 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="21bfc307-8188-473c-8dc6-d24acb8f0694" containerName="dnsmasq-dns" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.130622 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9a1a6da-0bb6-4002-96f3-2b4275db33f0" containerName="nova-cell1-conductor-db-sync" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.136744 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.149390 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.178826 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.188147 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a883af-abb4-4281-a082-af5d115e022c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"81a883af-abb4-4281-a082-af5d115e022c\") " pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.188262 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a883af-abb4-4281-a082-af5d115e022c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"81a883af-abb4-4281-a082-af5d115e022c\") " pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.188703 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltzwj\" (UniqueName: \"kubernetes.io/projected/81a883af-abb4-4281-a082-af5d115e022c-kube-api-access-ltzwj\") pod \"nova-cell1-conductor-0\" (UID: \"81a883af-abb4-4281-a082-af5d115e022c\") " pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.244213 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.244556 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-log" 
containerID="cri-o://6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765" gracePeriod=30 Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.244774 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-api" containerID="cri-o://cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171" gracePeriod=30 Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.256730 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.257097 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="512ae667-b970-4cf6-839b-d0d730bbf3a2" containerName="nova-scheduler-scheduler" containerID="cri-o://517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd" gracePeriod=30 Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.291563 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a883af-abb4-4281-a082-af5d115e022c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"81a883af-abb4-4281-a082-af5d115e022c\") " pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.291645 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a883af-abb4-4281-a082-af5d115e022c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"81a883af-abb4-4281-a082-af5d115e022c\") " pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.291720 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltzwj\" (UniqueName: \"kubernetes.io/projected/81a883af-abb4-4281-a082-af5d115e022c-kube-api-access-ltzwj\") pod 
\"nova-cell1-conductor-0\" (UID: \"81a883af-abb4-4281-a082-af5d115e022c\") " pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.296325 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a883af-abb4-4281-a082-af5d115e022c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"81a883af-abb4-4281-a082-af5d115e022c\") " pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.296901 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a883af-abb4-4281-a082-af5d115e022c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"81a883af-abb4-4281-a082-af5d115e022c\") " pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.314404 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltzwj\" (UniqueName: \"kubernetes.io/projected/81a883af-abb4-4281-a082-af5d115e022c-kube-api-access-ltzwj\") pod \"nova-cell1-conductor-0\" (UID: \"81a883af-abb4-4281-a082-af5d115e022c\") " pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:30 crc kubenswrapper[5029]: I0313 20:51:30.460993 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.016016 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.024323 5029 generic.go:334] "Generic (PLEG): container finished" podID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerID="6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765" exitCode=143 Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.024394 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"353919e9-7af7-4643-89c1-dd7b66b425e4","Type":"ContainerDied","Data":"6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765"} Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.501234 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.537296 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-combined-ca-bundle\") pod \"512ae667-b970-4cf6-839b-d0d730bbf3a2\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.537475 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-config-data\") pod \"512ae667-b970-4cf6-839b-d0d730bbf3a2\" (UID: \"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.537550 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdlpb\" (UniqueName: \"kubernetes.io/projected/512ae667-b970-4cf6-839b-d0d730bbf3a2-kube-api-access-wdlpb\") pod \"512ae667-b970-4cf6-839b-d0d730bbf3a2\" (UID: 
\"512ae667-b970-4cf6-839b-d0d730bbf3a2\") " Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.548318 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512ae667-b970-4cf6-839b-d0d730bbf3a2-kube-api-access-wdlpb" (OuterVolumeSpecName: "kube-api-access-wdlpb") pod "512ae667-b970-4cf6-839b-d0d730bbf3a2" (UID: "512ae667-b970-4cf6-839b-d0d730bbf3a2"). InnerVolumeSpecName "kube-api-access-wdlpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.569483 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "512ae667-b970-4cf6-839b-d0d730bbf3a2" (UID: "512ae667-b970-4cf6-839b-d0d730bbf3a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.571236 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-config-data" (OuterVolumeSpecName: "config-data") pod "512ae667-b970-4cf6-839b-d0d730bbf3a2" (UID: "512ae667-b970-4cf6-839b-d0d730bbf3a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.639524 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.639570 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdlpb\" (UniqueName: \"kubernetes.io/projected/512ae667-b970-4cf6-839b-d0d730bbf3a2-kube-api-access-wdlpb\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:31 crc kubenswrapper[5029]: I0313 20:51:31.639580 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ae667-b970-4cf6-839b-d0d730bbf3a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.036516 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.036551 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"512ae667-b970-4cf6-839b-d0d730bbf3a2","Type":"ContainerDied","Data":"517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd"} Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.036619 5029 scope.go:117] "RemoveContainer" containerID="517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.036391 5029 generic.go:334] "Generic (PLEG): container finished" podID="512ae667-b970-4cf6-839b-d0d730bbf3a2" containerID="517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd" exitCode=0 Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.051347 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"512ae667-b970-4cf6-839b-d0d730bbf3a2","Type":"ContainerDied","Data":"f77732d41762acd6131a90a3d1392f9d3ff1d3b1b38a17921beccf8845fe5c2d"} Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.055594 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"81a883af-abb4-4281-a082-af5d115e022c","Type":"ContainerStarted","Data":"ca59bd1b6e26605866e28f6da17aad82858e71b191aa56e2f2e0c87cca6f840d"} Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.055919 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"81a883af-abb4-4281-a082-af5d115e022c","Type":"ContainerStarted","Data":"49cc61219f15b86c88b1f9cc2af237f642a3ecd8c82a970b6ab2a02bc6915310"} Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.056328 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.082008 5029 scope.go:117] "RemoveContainer" containerID="517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd" Mar 13 20:51:32 crc kubenswrapper[5029]: E0313 20:51:32.083076 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd\": container with ID starting with 517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd not found: ID does not exist" containerID="517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.083143 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd"} err="failed to get container status \"517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd\": rpc error: code = NotFound desc = could not find container 
\"517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd\": container with ID starting with 517b14e27fa6662b211754289456084d1612d3a4043bb92325af22de7c32f7dd not found: ID does not exist" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.083333 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.083304408 podStartE2EDuration="2.083304408s" podCreationTimestamp="2026-03-13 20:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:51:32.076985805 +0000 UTC m=+1452.093068208" watchObservedRunningTime="2026-03-13 20:51:32.083304408 +0000 UTC m=+1452.099386811" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.116928 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.131798 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.143937 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:51:32 crc kubenswrapper[5029]: E0313 20:51:32.144509 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512ae667-b970-4cf6-839b-d0d730bbf3a2" containerName="nova-scheduler-scheduler" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.144539 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="512ae667-b970-4cf6-839b-d0d730bbf3a2" containerName="nova-scheduler-scheduler" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.144758 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="512ae667-b970-4cf6-839b-d0d730bbf3a2" containerName="nova-scheduler-scheduler" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.145575 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.149035 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.158959 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.254284 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhp9n\" (UniqueName: \"kubernetes.io/projected/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-kube-api-access-qhp9n\") pod \"nova-scheduler-0\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.254661 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.254735 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-config-data\") pod \"nova-scheduler-0\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.357605 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhp9n\" (UniqueName: \"kubernetes.io/projected/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-kube-api-access-qhp9n\") pod \"nova-scheduler-0\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.357678 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.357724 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-config-data\") pod \"nova-scheduler-0\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.363562 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-config-data\") pod \"nova-scheduler-0\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.363783 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.374727 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhp9n\" (UniqueName: \"kubernetes.io/projected/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-kube-api-access-qhp9n\") pod \"nova-scheduler-0\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.473876 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.621287 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512ae667-b970-4cf6-839b-d0d730bbf3a2" path="/var/lib/kubelet/pods/512ae667-b970-4cf6-839b-d0d730bbf3a2/volumes" Mar 13 20:51:32 crc kubenswrapper[5029]: I0313 20:51:32.986249 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:51:32 crc kubenswrapper[5029]: W0313 20:51:32.986279 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb77ad0e_0a71_465a_a2bf_eb94354aa22e.slice/crio-1e1d2c900e6d76436d3fd6d503363e9325832ebaa373a08bb606611a3a9ac160 WatchSource:0}: Error finding container 1e1d2c900e6d76436d3fd6d503363e9325832ebaa373a08bb606611a3a9ac160: Status 404 returned error can't find the container with id 1e1d2c900e6d76436d3fd6d503363e9325832ebaa373a08bb606611a3a9ac160 Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.076573 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cb77ad0e-0a71-465a-a2bf-eb94354aa22e","Type":"ContainerStarted","Data":"1e1d2c900e6d76436d3fd6d503363e9325832ebaa373a08bb606611a3a9ac160"} Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.235813 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.236389 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4" containerName="kube-state-metrics" containerID="cri-o://d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa" gracePeriod=30 Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.244207 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 20:51:33 
crc kubenswrapper[5029]: I0313 20:51:33.244261 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.316298 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.316370 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.890649 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.895528 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxklx\" (UniqueName: \"kubernetes.io/projected/ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4-kube-api-access-mxklx\") pod \"ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4\" (UID: \"ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4\") " Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.904989 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4-kube-api-access-mxklx" (OuterVolumeSpecName: "kube-api-access-mxklx") pod "ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4" (UID: "ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4"). InnerVolumeSpecName "kube-api-access-mxklx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.907438 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:51:33 crc kubenswrapper[5029]: I0313 20:51:33.998708 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxklx\" (UniqueName: \"kubernetes.io/projected/ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4-kube-api-access-mxklx\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.099424 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vk4p\" (UniqueName: \"kubernetes.io/projected/353919e9-7af7-4643-89c1-dd7b66b425e4-kube-api-access-9vk4p\") pod \"353919e9-7af7-4643-89c1-dd7b66b425e4\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.099483 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-combined-ca-bundle\") pod \"353919e9-7af7-4643-89c1-dd7b66b425e4\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.099550 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-config-data\") pod \"353919e9-7af7-4643-89c1-dd7b66b425e4\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.099759 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353919e9-7af7-4643-89c1-dd7b66b425e4-logs\") pod \"353919e9-7af7-4643-89c1-dd7b66b425e4\" (UID: \"353919e9-7af7-4643-89c1-dd7b66b425e4\") " Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.100809 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/353919e9-7af7-4643-89c1-dd7b66b425e4-logs" (OuterVolumeSpecName: "logs") pod 
"353919e9-7af7-4643-89c1-dd7b66b425e4" (UID: "353919e9-7af7-4643-89c1-dd7b66b425e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.104685 5029 generic.go:334] "Generic (PLEG): container finished" podID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerID="cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171" exitCode=0 Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.104793 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"353919e9-7af7-4643-89c1-dd7b66b425e4","Type":"ContainerDied","Data":"cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171"} Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.104823 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"353919e9-7af7-4643-89c1-dd7b66b425e4","Type":"ContainerDied","Data":"38459d4f854c53b0b32c8b3963f34314d84a5eb2f97f3fd0aedc41f5aba46c3b"} Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.104841 5029 scope.go:117] "RemoveContainer" containerID="cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.104995 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.119449 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cb77ad0e-0a71-465a-a2bf-eb94354aa22e","Type":"ContainerStarted","Data":"c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19"} Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.121080 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353919e9-7af7-4643-89c1-dd7b66b425e4-kube-api-access-9vk4p" (OuterVolumeSpecName: "kube-api-access-9vk4p") pod "353919e9-7af7-4643-89c1-dd7b66b425e4" (UID: "353919e9-7af7-4643-89c1-dd7b66b425e4"). InnerVolumeSpecName "kube-api-access-9vk4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.133260 5029 generic.go:334] "Generic (PLEG): container finished" podID="ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4" containerID="d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa" exitCode=2 Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.133317 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4","Type":"ContainerDied","Data":"d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa"} Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.133353 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4","Type":"ContainerDied","Data":"a903f70c0faed9cca0a1414b50cc390bca25ef8ea45e9efd4efdd3b7b7f05d6d"} Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.133444 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.142043 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-config-data" (OuterVolumeSpecName: "config-data") pod "353919e9-7af7-4643-89c1-dd7b66b425e4" (UID: "353919e9-7af7-4643-89c1-dd7b66b425e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.152064 5029 scope.go:117] "RemoveContainer" containerID="6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.152478 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "353919e9-7af7-4643-89c1-dd7b66b425e4" (UID: "353919e9-7af7-4643-89c1-dd7b66b425e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.168574 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.1685554910000002 podStartE2EDuration="2.168555491s" podCreationTimestamp="2026-03-13 20:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:51:34.140037584 +0000 UTC m=+1454.156119987" watchObservedRunningTime="2026-03-13 20:51:34.168555491 +0000 UTC m=+1454.184637894" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.188577 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.191049 5029 scope.go:117] "RemoveContainer" containerID="cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171" Mar 13 20:51:34 crc kubenswrapper[5029]: E0313 20:51:34.191506 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171\": container with ID starting with cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171 not found: ID does not exist" containerID="cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.191545 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171"} err="failed to get container status \"cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171\": rpc error: code = NotFound desc = could not find container \"cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171\": container with ID starting with cf7f5eeb94dbe1f1083b69de1264294daff9225ef06512c69a33bdbff72a8171 not found: ID does 
not exist" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.191573 5029 scope.go:117] "RemoveContainer" containerID="6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765" Mar 13 20:51:34 crc kubenswrapper[5029]: E0313 20:51:34.195236 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765\": container with ID starting with 6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765 not found: ID does not exist" containerID="6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.195278 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765"} err="failed to get container status \"6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765\": rpc error: code = NotFound desc = could not find container \"6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765\": container with ID starting with 6a496d3cc5bee1b7dbf56074c32e1e777c96baa3220e0dd341484e832411e765 not found: ID does not exist" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.195307 5029 scope.go:117] "RemoveContainer" containerID="d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.204199 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vk4p\" (UniqueName: \"kubernetes.io/projected/353919e9-7af7-4643-89c1-dd7b66b425e4-kube-api-access-9vk4p\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.204240 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:34 
crc kubenswrapper[5029]: I0313 20:51:34.204251 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353919e9-7af7-4643-89c1-dd7b66b425e4-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.204262 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353919e9-7af7-4643-89c1-dd7b66b425e4-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.212285 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.226220 5029 scope.go:117] "RemoveContainer" containerID="d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa" Mar 13 20:51:34 crc kubenswrapper[5029]: E0313 20:51:34.227092 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa\": container with ID starting with d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa not found: ID does not exist" containerID="d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.227122 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa"} err="failed to get container status \"d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa\": rpc error: code = NotFound desc = could not find container \"d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa\": container with ID starting with d890b68bfae00d5731f4a0d5b76121b318e282369e733972904f1efc2267e9fa not found: ID does not exist" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.232923 5029 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/kube-state-metrics-0"] Mar 13 20:51:34 crc kubenswrapper[5029]: E0313 20:51:34.233390 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-log" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.233408 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-log" Mar 13 20:51:34 crc kubenswrapper[5029]: E0313 20:51:34.233446 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-api" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.233453 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-api" Mar 13 20:51:34 crc kubenswrapper[5029]: E0313 20:51:34.233475 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4" containerName="kube-state-metrics" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.233482 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4" containerName="kube-state-metrics" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.233661 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4" containerName="kube-state-metrics" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.233677 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-api" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.233695 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" containerName="nova-api-log" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.234433 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.238278 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.238926 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.289929 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.307610 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b04bebb-7126-472e-bfdc-f106f0190626-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.307692 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstz8\" (UniqueName: \"kubernetes.io/projected/1b04bebb-7126-472e-bfdc-f106f0190626-kube-api-access-nstz8\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.307875 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b04bebb-7126-472e-bfdc-f106f0190626-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.307971 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/1b04bebb-7126-472e-bfdc-f106f0190626-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.409728 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b04bebb-7126-472e-bfdc-f106f0190626-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.409806 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nstz8\" (UniqueName: \"kubernetes.io/projected/1b04bebb-7126-472e-bfdc-f106f0190626-kube-api-access-nstz8\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.410285 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b04bebb-7126-472e-bfdc-f106f0190626-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.410512 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1b04bebb-7126-472e-bfdc-f106f0190626-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.415273 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1b04bebb-7126-472e-bfdc-f106f0190626-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.416712 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b04bebb-7126-472e-bfdc-f106f0190626-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.422710 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1b04bebb-7126-472e-bfdc-f106f0190626-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.434389 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nstz8\" (UniqueName: \"kubernetes.io/projected/1b04bebb-7126-472e-bfdc-f106f0190626-kube-api-access-nstz8\") pod \"kube-state-metrics-0\" (UID: \"1b04bebb-7126-472e-bfdc-f106f0190626\") " pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.462090 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.471823 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.486170 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.487928 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.492046 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.497313 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.513015 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-config-data\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.513099 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlqpd\" (UniqueName: \"kubernetes.io/projected/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-kube-api-access-xlqpd\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.513142 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-logs\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.513235 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.599212 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.616386 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-logs\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.616658 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.616793 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-config-data\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.616901 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlqpd\" (UniqueName: \"kubernetes.io/projected/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-kube-api-access-xlqpd\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.617359 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-logs\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.621948 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.627596 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353919e9-7af7-4643-89c1-dd7b66b425e4" path="/var/lib/kubelet/pods/353919e9-7af7-4643-89c1-dd7b66b425e4/volumes" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.628579 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4" path="/var/lib/kubelet/pods/ab983f1f-460d-45ac-b8e5-7ccf3e5cdfe4/volumes" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.636148 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlqpd\" (UniqueName: \"kubernetes.io/projected/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-kube-api-access-xlqpd\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.645755 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-config-data\") pod \"nova-api-0\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " pod="openstack/nova-api-0" Mar 13 20:51:34 crc kubenswrapper[5029]: I0313 20:51:34.845348 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:51:35 crc kubenswrapper[5029]: I0313 20:51:35.163461 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 13 20:51:35 crc kubenswrapper[5029]: W0313 20:51:35.177634 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b04bebb_7126_472e_bfdc_f106f0190626.slice/crio-a65f54770769135b57a2217801da68a4f2adff22dc46b65448618130f93b7867 WatchSource:0}: Error finding container a65f54770769135b57a2217801da68a4f2adff22dc46b65448618130f93b7867: Status 404 returned error can't find the container with id a65f54770769135b57a2217801da68a4f2adff22dc46b65448618130f93b7867 Mar 13 20:51:35 crc kubenswrapper[5029]: I0313 20:51:35.362452 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:35 crc kubenswrapper[5029]: W0313 20:51:35.372224 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab47deae_b3a8_4cde_9aa1_8c7a92ef1da5.slice/crio-e02cfd3ef64976cf64b8de0ff0768ab929686394276cacdb018ba8c6f141db74 WatchSource:0}: Error finding container e02cfd3ef64976cf64b8de0ff0768ab929686394276cacdb018ba8c6f141db74: Status 404 returned error can't find the container with id e02cfd3ef64976cf64b8de0ff0768ab929686394276cacdb018ba8c6f141db74 Mar 13 20:51:35 crc kubenswrapper[5029]: I0313 20:51:35.960569 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:51:35 crc kubenswrapper[5029]: I0313 20:51:35.961404 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="ceilometer-central-agent" containerID="cri-o://7501ce3787ad259910a4dd5f014c8bfed9a1b2645235474f868461e4e63ef402" gracePeriod=30 Mar 13 20:51:35 crc kubenswrapper[5029]: I0313 
20:51:35.961446 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="proxy-httpd" containerID="cri-o://924fff1b8ed5903434680277adb181b7483b716961296b65e37def4eb3e1ab15" gracePeriod=30 Mar 13 20:51:35 crc kubenswrapper[5029]: I0313 20:51:35.961462 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="sg-core" containerID="cri-o://cf84df7733c24482cd3931d0f5871c77b07206f191412540c9fedfeeb421f913" gracePeriod=30 Mar 13 20:51:35 crc kubenswrapper[5029]: I0313 20:51:35.961565 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="ceilometer-notification-agent" containerID="cri-o://45b72e2b0db3c5b81f5c7582f6e0ea7b35e46f79855b2ad9bfba82d06c890d63" gracePeriod=30 Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.162486 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1b04bebb-7126-472e-bfdc-f106f0190626","Type":"ContainerStarted","Data":"2f013f5c74688ac883e1225ab61c1e5887ac79387d8f04efbb7365ff97099158"} Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.163044 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.163056 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1b04bebb-7126-472e-bfdc-f106f0190626","Type":"ContainerStarted","Data":"a65f54770769135b57a2217801da68a4f2adff22dc46b65448618130f93b7867"} Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.165885 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5","Type":"ContainerStarted","Data":"6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a"} Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.165952 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5","Type":"ContainerStarted","Data":"b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa"} Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.165974 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5","Type":"ContainerStarted","Data":"e02cfd3ef64976cf64b8de0ff0768ab929686394276cacdb018ba8c6f141db74"} Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.170426 5029 generic.go:334] "Generic (PLEG): container finished" podID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerID="cf84df7733c24482cd3931d0f5871c77b07206f191412540c9fedfeeb421f913" exitCode=2 Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.170499 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerDied","Data":"cf84df7733c24482cd3931d0f5871c77b07206f191412540c9fedfeeb421f913"} Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.180444 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.655346299 podStartE2EDuration="2.180404963s" podCreationTimestamp="2026-03-13 20:51:34 +0000 UTC" firstStartedPulling="2026-03-13 20:51:35.181510943 +0000 UTC m=+1455.197593346" lastFinishedPulling="2026-03-13 20:51:35.706569607 +0000 UTC m=+1455.722652010" observedRunningTime="2026-03-13 20:51:36.179635463 +0000 UTC m=+1456.195717886" watchObservedRunningTime="2026-03-13 20:51:36.180404963 +0000 UTC m=+1456.196487366" Mar 13 20:51:36 crc kubenswrapper[5029]: I0313 20:51:36.218733 5029 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.218706438 podStartE2EDuration="2.218706438s" podCreationTimestamp="2026-03-13 20:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:51:36.210030702 +0000 UTC m=+1456.226113105" watchObservedRunningTime="2026-03-13 20:51:36.218706438 +0000 UTC m=+1456.234788831" Mar 13 20:51:37 crc kubenswrapper[5029]: I0313 20:51:37.186289 5029 generic.go:334] "Generic (PLEG): container finished" podID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerID="924fff1b8ed5903434680277adb181b7483b716961296b65e37def4eb3e1ab15" exitCode=0 Mar 13 20:51:37 crc kubenswrapper[5029]: I0313 20:51:37.186889 5029 generic.go:334] "Generic (PLEG): container finished" podID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerID="7501ce3787ad259910a4dd5f014c8bfed9a1b2645235474f868461e4e63ef402" exitCode=0 Mar 13 20:51:37 crc kubenswrapper[5029]: I0313 20:51:37.186381 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerDied","Data":"924fff1b8ed5903434680277adb181b7483b716961296b65e37def4eb3e1ab15"} Mar 13 20:51:37 crc kubenswrapper[5029]: I0313 20:51:37.186967 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerDied","Data":"7501ce3787ad259910a4dd5f014c8bfed9a1b2645235474f868461e4e63ef402"} Mar 13 20:51:37 crc kubenswrapper[5029]: I0313 20:51:37.474543 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.199042 5029 generic.go:334] "Generic (PLEG): container finished" podID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerID="45b72e2b0db3c5b81f5c7582f6e0ea7b35e46f79855b2ad9bfba82d06c890d63" 
exitCode=0 Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.199109 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerDied","Data":"45b72e2b0db3c5b81f5c7582f6e0ea7b35e46f79855b2ad9bfba82d06c890d63"} Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.199407 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2bb885a-b58e-4f5b-994b-0c676f0e78ab","Type":"ContainerDied","Data":"ffd0f66b0505413e3d73d78033948d67c79b87d7b39456b1229b19f9e0989b8e"} Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.199438 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd0f66b0505413e3d73d78033948d67c79b87d7b39456b1229b19f9e0989b8e" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.221104 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.312424 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-config-data\") pod \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.313135 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jj9m\" (UniqueName: \"kubernetes.io/projected/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-kube-api-access-8jj9m\") pod \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.314047 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-combined-ca-bundle\") pod 
\"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.314247 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-scripts\") pod \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.314314 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-run-httpd\") pod \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.314628 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-log-httpd\") pod \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.314666 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-sg-core-conf-yaml\") pod \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\" (UID: \"f2bb885a-b58e-4f5b-994b-0c676f0e78ab\") " Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.315495 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2bb885a-b58e-4f5b-994b-0c676f0e78ab" (UID: "f2bb885a-b58e-4f5b-994b-0c676f0e78ab"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.316003 5029 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.316006 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2bb885a-b58e-4f5b-994b-0c676f0e78ab" (UID: "f2bb885a-b58e-4f5b-994b-0c676f0e78ab"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.338702 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-kube-api-access-8jj9m" (OuterVolumeSpecName: "kube-api-access-8jj9m") pod "f2bb885a-b58e-4f5b-994b-0c676f0e78ab" (UID: "f2bb885a-b58e-4f5b-994b-0c676f0e78ab"). InnerVolumeSpecName "kube-api-access-8jj9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.355534 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-scripts" (OuterVolumeSpecName: "scripts") pod "f2bb885a-b58e-4f5b-994b-0c676f0e78ab" (UID: "f2bb885a-b58e-4f5b-994b-0c676f0e78ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.357985 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2bb885a-b58e-4f5b-994b-0c676f0e78ab" (UID: "f2bb885a-b58e-4f5b-994b-0c676f0e78ab"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.418357 5029 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.418407 5029 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.418422 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jj9m\" (UniqueName: \"kubernetes.io/projected/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-kube-api-access-8jj9m\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.418435 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.440530 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2bb885a-b58e-4f5b-994b-0c676f0e78ab" (UID: "f2bb885a-b58e-4f5b-994b-0c676f0e78ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.453064 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-config-data" (OuterVolumeSpecName: "config-data") pod "f2bb885a-b58e-4f5b-994b-0c676f0e78ab" (UID: "f2bb885a-b58e-4f5b-994b-0c676f0e78ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.520096 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:38 crc kubenswrapper[5029]: I0313 20:51:38.520135 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bb885a-b58e-4f5b-994b-0c676f0e78ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.211473 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.239364 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.249255 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.275190 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:51:39 crc kubenswrapper[5029]: E0313 20:51:39.275605 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="proxy-httpd" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.275625 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="proxy-httpd" Mar 13 20:51:39 crc kubenswrapper[5029]: E0313 20:51:39.275658 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="ceilometer-notification-agent" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.275667 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" 
containerName="ceilometer-notification-agent" Mar 13 20:51:39 crc kubenswrapper[5029]: E0313 20:51:39.275691 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="ceilometer-central-agent" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.275698 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="ceilometer-central-agent" Mar 13 20:51:39 crc kubenswrapper[5029]: E0313 20:51:39.275708 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="sg-core" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.275714 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="sg-core" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.275979 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="ceilometer-notification-agent" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.276003 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="proxy-httpd" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.276019 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="sg-core" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.276046 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" containerName="ceilometer-central-agent" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.277847 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.280335 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.280412 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.282177 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.301398 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.441196 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82vbc\" (UniqueName: \"kubernetes.io/projected/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-kube-api-access-82vbc\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.441409 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-scripts\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.441500 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.441531 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.441609 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-run-httpd\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.441689 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-config-data\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.441883 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-log-httpd\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.441945 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.543771 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.543809 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.543840 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-run-httpd\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.543886 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-config-data\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.543953 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-log-httpd\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.543980 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.544042 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-82vbc\" (UniqueName: \"kubernetes.io/projected/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-kube-api-access-82vbc\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.544087 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-scripts\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.544955 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-run-httpd\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.545087 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-log-httpd\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.549797 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.550402 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-scripts\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.556929 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.558652 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.559549 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-config-data\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.575651 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82vbc\" (UniqueName: \"kubernetes.io/projected/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-kube-api-access-82vbc\") pod \"ceilometer-0\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " pod="openstack/ceilometer-0" Mar 13 20:51:39 crc kubenswrapper[5029]: I0313 20:51:39.596546 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:51:40 crc kubenswrapper[5029]: I0313 20:51:40.088595 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:51:40 crc kubenswrapper[5029]: I0313 20:51:40.223676 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerStarted","Data":"2738a8352ec5eebecd2a54fd5ff6f5a3a78243a65a1cf76a65602c0aae12d613"} Mar 13 20:51:40 crc kubenswrapper[5029]: I0313 20:51:40.498387 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 13 20:51:40 crc kubenswrapper[5029]: I0313 20:51:40.614700 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bb885a-b58e-4f5b-994b-0c676f0e78ab" path="/var/lib/kubelet/pods/f2bb885a-b58e-4f5b-994b-0c676f0e78ab/volumes" Mar 13 20:51:41 crc kubenswrapper[5029]: I0313 20:51:41.259563 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerStarted","Data":"23e6bc9c14534d6f845db8d15e20e317e8b796d307edb1dc1563d6aedbeb1c7f"} Mar 13 20:51:42 crc kubenswrapper[5029]: I0313 20:51:42.276656 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerStarted","Data":"fc00b7ad637e894ceaa0c64535b85769625e77d04595ace0d589dffb9a5ad277"} Mar 13 20:51:42 crc kubenswrapper[5029]: I0313 20:51:42.474110 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 13 20:51:42 crc kubenswrapper[5029]: I0313 20:51:42.503258 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 13 20:51:43 crc kubenswrapper[5029]: I0313 20:51:43.287963 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerStarted","Data":"d13d100eab9cd59b20012e91b029d89372d35a43073cc9fdc21772af64f2a8cc"} Mar 13 20:51:43 crc kubenswrapper[5029]: I0313 20:51:43.320124 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 13 20:51:44 crc kubenswrapper[5029]: I0313 20:51:44.627342 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 13 20:51:44 crc kubenswrapper[5029]: I0313 20:51:44.846303 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 20:51:44 crc kubenswrapper[5029]: I0313 20:51:44.846638 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 20:51:45 crc kubenswrapper[5029]: I0313 20:51:45.312795 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerStarted","Data":"9b03f9203983dd44ef5a7ad30a99c5c4f3234fd0b6694fe439326b4bfc63c1b7"} Mar 13 20:51:45 crc kubenswrapper[5029]: I0313 20:51:45.313413 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 20:51:45 crc kubenswrapper[5029]: I0313 20:51:45.343294 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.399906115 podStartE2EDuration="6.343272491s" podCreationTimestamp="2026-03-13 20:51:39 +0000 UTC" firstStartedPulling="2026-03-13 20:51:40.096782482 +0000 UTC m=+1460.112864885" lastFinishedPulling="2026-03-13 20:51:44.040148848 +0000 UTC m=+1464.056231261" observedRunningTime="2026-03-13 20:51:45.333021211 +0000 UTC m=+1465.349103614" watchObservedRunningTime="2026-03-13 20:51:45.343272491 +0000 UTC m=+1465.359354894" Mar 13 20:51:45 crc kubenswrapper[5029]: I0313 20:51:45.930097 5029 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 20:51:45 crc kubenswrapper[5029]: I0313 20:51:45.930120 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 20:51:52 crc kubenswrapper[5029]: I0313 20:51:52.845530 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 20:51:52 crc kubenswrapper[5029]: I0313 20:51:52.846556 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.431641 5029 generic.go:334] "Generic (PLEG): container finished" podID="bac0608b-f7a8-45e2-9dae-a5cd1623f6db" containerID="b99729606e95ff8ba7c3943c0cd738121474ba8d694f5297382de70bb267ac8c" exitCode=137 Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.432535 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bac0608b-f7a8-45e2-9dae-a5cd1623f6db","Type":"ContainerDied","Data":"b99729606e95ff8ba7c3943c0cd738121474ba8d694f5297382de70bb267ac8c"} Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.432591 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bac0608b-f7a8-45e2-9dae-a5cd1623f6db","Type":"ContainerDied","Data":"d5585615df1b4aa26cf1795d9080b7374ac0eb3b393fca340e4f15d808f34bc0"} Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.432608 5029 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d5585615df1b4aa26cf1795d9080b7374ac0eb3b393fca340e4f15d808f34bc0" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.443373 5029 generic.go:334] "Generic (PLEG): container finished" podID="c6343032-288c-493a-9a01-2595afb05818" containerID="2113b4c7052640b04abaeac4661ed3b21b11aa11f8963cd632dfbbc21d79a667" exitCode=137 Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.443460 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6343032-288c-493a-9a01-2595afb05818","Type":"ContainerDied","Data":"2113b4c7052640b04abaeac4661ed3b21b11aa11f8963cd632dfbbc21d79a667"} Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.443825 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6343032-288c-493a-9a01-2595afb05818","Type":"ContainerDied","Data":"fffc6e2a6ef1634fc40609d658e28b15f96360bd411588885b9cbf0fc878874b"} Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.443970 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fffc6e2a6ef1634fc40609d658e28b15f96360bd411588885b9cbf0fc878874b" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.446486 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.452917 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.582078 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-combined-ca-bundle\") pod \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.582176 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-combined-ca-bundle\") pod \"c6343032-288c-493a-9a01-2595afb05818\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.582213 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-config-data\") pod \"c6343032-288c-493a-9a01-2595afb05818\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.582261 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n5lt\" (UniqueName: \"kubernetes.io/projected/c6343032-288c-493a-9a01-2595afb05818-kube-api-access-9n5lt\") pod \"c6343032-288c-493a-9a01-2595afb05818\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.582344 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-config-data\") pod \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.582415 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c6343032-288c-493a-9a01-2595afb05818-logs\") pod \"c6343032-288c-493a-9a01-2595afb05818\" (UID: \"c6343032-288c-493a-9a01-2595afb05818\") " Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.582476 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tqdh\" (UniqueName: \"kubernetes.io/projected/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-kube-api-access-4tqdh\") pod \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\" (UID: \"bac0608b-f7a8-45e2-9dae-a5cd1623f6db\") " Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.586728 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6343032-288c-493a-9a01-2595afb05818-logs" (OuterVolumeSpecName: "logs") pod "c6343032-288c-493a-9a01-2595afb05818" (UID: "c6343032-288c-493a-9a01-2595afb05818"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.590369 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6343032-288c-493a-9a01-2595afb05818-kube-api-access-9n5lt" (OuterVolumeSpecName: "kube-api-access-9n5lt") pod "c6343032-288c-493a-9a01-2595afb05818" (UID: "c6343032-288c-493a-9a01-2595afb05818"). InnerVolumeSpecName "kube-api-access-9n5lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.600165 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-kube-api-access-4tqdh" (OuterVolumeSpecName: "kube-api-access-4tqdh") pod "bac0608b-f7a8-45e2-9dae-a5cd1623f6db" (UID: "bac0608b-f7a8-45e2-9dae-a5cd1623f6db"). InnerVolumeSpecName "kube-api-access-4tqdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.613260 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-config-data" (OuterVolumeSpecName: "config-data") pod "bac0608b-f7a8-45e2-9dae-a5cd1623f6db" (UID: "bac0608b-f7a8-45e2-9dae-a5cd1623f6db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.623060 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-config-data" (OuterVolumeSpecName: "config-data") pod "c6343032-288c-493a-9a01-2595afb05818" (UID: "c6343032-288c-493a-9a01-2595afb05818"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.624055 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6343032-288c-493a-9a01-2595afb05818" (UID: "c6343032-288c-493a-9a01-2595afb05818"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.627737 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bac0608b-f7a8-45e2-9dae-a5cd1623f6db" (UID: "bac0608b-f7a8-45e2-9dae-a5cd1623f6db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.685458 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.685759 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.685840 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6343032-288c-493a-9a01-2595afb05818-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.685937 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n5lt\" (UniqueName: \"kubernetes.io/projected/c6343032-288c-493a-9a01-2595afb05818-kube-api-access-9n5lt\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.686152 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.686232 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6343032-288c-493a-9a01-2595afb05818-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:53 crc kubenswrapper[5029]: I0313 20:51:53.686309 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tqdh\" (UniqueName: \"kubernetes.io/projected/bac0608b-f7a8-45e2-9dae-a5cd1623f6db-kube-api-access-4tqdh\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.459919 5029 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.460061 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.512489 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.523672 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.547125 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.564960 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.583012 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:51:54 crc kubenswrapper[5029]: E0313 20:51:54.583550 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac0608b-f7a8-45e2-9dae-a5cd1623f6db" containerName="nova-cell1-novncproxy-novncproxy" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.583566 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac0608b-f7a8-45e2-9dae-a5cd1623f6db" containerName="nova-cell1-novncproxy-novncproxy" Mar 13 20:51:54 crc kubenswrapper[5029]: E0313 20:51:54.583587 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6343032-288c-493a-9a01-2595afb05818" containerName="nova-metadata-metadata" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.583593 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6343032-288c-493a-9a01-2595afb05818" containerName="nova-metadata-metadata" Mar 13 20:51:54 crc kubenswrapper[5029]: E0313 20:51:54.583602 5029 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c6343032-288c-493a-9a01-2595afb05818" containerName="nova-metadata-log" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.583610 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6343032-288c-493a-9a01-2595afb05818" containerName="nova-metadata-log" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.583797 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac0608b-f7a8-45e2-9dae-a5cd1623f6db" containerName="nova-cell1-novncproxy-novncproxy" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.583814 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6343032-288c-493a-9a01-2595afb05818" containerName="nova-metadata-metadata" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.583834 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6343032-288c-493a-9a01-2595afb05818" containerName="nova-metadata-log" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.584907 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.590604 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.590715 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.627974 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac0608b-f7a8-45e2-9dae-a5cd1623f6db" path="/var/lib/kubelet/pods/bac0608b-f7a8-45e2-9dae-a5cd1623f6db/volumes" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.629097 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6343032-288c-493a-9a01-2595afb05818" path="/var/lib/kubelet/pods/c6343032-288c-493a-9a01-2595afb05818/volumes" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.629892 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.641921 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.643993 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.647177 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.648444 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.651495 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.652499 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.712761 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-logs\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.712971 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-config-data\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.713011 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nxzn\" (UniqueName: \"kubernetes.io/projected/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-kube-api-access-9nxzn\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.713269 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.713320 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.816446 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.816518 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.816564 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.816606 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.816627 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gk79\" (UniqueName: \"kubernetes.io/projected/78dac452-38e6-4307-b8ec-097bb5c99654-kube-api-access-8gk79\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.816704 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-logs\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.816727 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.817426 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-logs\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.817133 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-config-data\") pod 
\"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.817815 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxzn\" (UniqueName: \"kubernetes.io/projected/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-kube-api-access-9nxzn\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.817920 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.824486 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.825728 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-config-data\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.826498 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: 
I0313 20:51:54.841659 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxzn\" (UniqueName: \"kubernetes.io/projected/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-kube-api-access-9nxzn\") pod \"nova-metadata-0\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.850172 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.855089 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.855892 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.910728 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.921304 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.921475 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.921669 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.921744 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.921770 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gk79\" (UniqueName: \"kubernetes.io/projected/78dac452-38e6-4307-b8ec-097bb5c99654-kube-api-access-8gk79\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.926538 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.928025 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.929023 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.930591 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78dac452-38e6-4307-b8ec-097bb5c99654-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.949821 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gk79\" (UniqueName: \"kubernetes.io/projected/78dac452-38e6-4307-b8ec-097bb5c99654-kube-api-access-8gk79\") pod \"nova-cell1-novncproxy-0\" (UID: \"78dac452-38e6-4307-b8ec-097bb5c99654\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:54 crc kubenswrapper[5029]: I0313 20:51:54.970792 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.487247 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.495342 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78dac452-38e6-4307-b8ec-097bb5c99654","Type":"ContainerStarted","Data":"45f9260b8cafd3e5e66156db17290818d79bd6cd2ccb81b4183529e8b4ca8824"} Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.501710 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 20:51:55 crc kubenswrapper[5029]: W0313 20:51:55.717082 5029 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda313bf88_42e1_4ce1_98e4_2b5fab75ec6d.slice/crio-0efca41f12e7905b7ab2f585b9772698ce50d3f72d5bed45cd5bc00c4ae537eb WatchSource:0}: Error finding container 0efca41f12e7905b7ab2f585b9772698ce50d3f72d5bed45cd5bc00c4ae537eb: Status 404 returned error can't find the container with id 0efca41f12e7905b7ab2f585b9772698ce50d3f72d5bed45cd5bc00c4ae537eb Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.718766 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.755764 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-cmdg5"] Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.757804 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.769031 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-cmdg5"] Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.921017 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw8fx\" (UniqueName: \"kubernetes.io/projected/0474bc88-da72-4731-85a5-bc2b32263a20-kube-api-access-xw8fx\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.921138 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.921201 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.921229 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.921264 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-svc\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:55 crc kubenswrapper[5029]: I0313 20:51:55.921316 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-config\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.024053 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.024191 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.024238 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.024298 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-svc\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.024376 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-config\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.024470 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw8fx\" (UniqueName: \"kubernetes.io/projected/0474bc88-da72-4731-85a5-bc2b32263a20-kube-api-access-xw8fx\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.025059 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.025647 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.028291 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-config\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.028540 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-svc\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.030691 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.063951 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw8fx\" (UniqueName: 
\"kubernetes.io/projected/0474bc88-da72-4731-85a5-bc2b32263a20-kube-api-access-xw8fx\") pod \"dnsmasq-dns-5b4c997d87-cmdg5\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.118530 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.514782 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78dac452-38e6-4307-b8ec-097bb5c99654","Type":"ContainerStarted","Data":"e6e5cee279694bd48a8502d3f7427a7c4de44679da6e93f1c1fb21722d295b50"} Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.524813 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d","Type":"ContainerStarted","Data":"9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd"} Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.524901 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d","Type":"ContainerStarted","Data":"59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449"} Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.524916 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d","Type":"ContainerStarted","Data":"0efca41f12e7905b7ab2f585b9772698ce50d3f72d5bed45cd5bc00c4ae537eb"} Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.554183 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.554161058 podStartE2EDuration="2.554161058s" podCreationTimestamp="2026-03-13 20:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:51:56.545413989 +0000 UTC m=+1476.561496392" watchObservedRunningTime="2026-03-13 20:51:56.554161058 +0000 UTC m=+1476.570243461" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.600571 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.600542632 podStartE2EDuration="2.600542632s" podCreationTimestamp="2026-03-13 20:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:51:56.592329688 +0000 UTC m=+1476.608412091" watchObservedRunningTime="2026-03-13 20:51:56.600542632 +0000 UTC m=+1476.616625035" Mar 13 20:51:56 crc kubenswrapper[5029]: I0313 20:51:56.716306 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-cmdg5"] Mar 13 20:51:56 crc kubenswrapper[5029]: W0313 20:51:56.718600 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0474bc88_da72_4731_85a5_bc2b32263a20.slice/crio-00f2a14f365f2a62c5f8456be4a2ae25741b8d809c8549066a65abc66fe37036 WatchSource:0}: Error finding container 00f2a14f365f2a62c5f8456be4a2ae25741b8d809c8549066a65abc66fe37036: Status 404 returned error can't find the container with id 00f2a14f365f2a62c5f8456be4a2ae25741b8d809c8549066a65abc66fe37036 Mar 13 20:51:57 crc kubenswrapper[5029]: I0313 20:51:57.536131 5029 generic.go:334] "Generic (PLEG): container finished" podID="0474bc88-da72-4731-85a5-bc2b32263a20" containerID="329805cef154718ed329ebc7408a17f56118c8e2e3b502cdcbe2d476ce94279f" exitCode=0 Mar 13 20:51:57 crc kubenswrapper[5029]: I0313 20:51:57.536264 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" 
event={"ID":"0474bc88-da72-4731-85a5-bc2b32263a20","Type":"ContainerDied","Data":"329805cef154718ed329ebc7408a17f56118c8e2e3b502cdcbe2d476ce94279f"} Mar 13 20:51:57 crc kubenswrapper[5029]: I0313 20:51:57.536794 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" event={"ID":"0474bc88-da72-4731-85a5-bc2b32263a20","Type":"ContainerStarted","Data":"00f2a14f365f2a62c5f8456be4a2ae25741b8d809c8549066a65abc66fe37036"} Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.263034 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.269016 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="ceilometer-notification-agent" containerID="cri-o://fc00b7ad637e894ceaa0c64535b85769625e77d04595ace0d589dffb9a5ad277" gracePeriod=30 Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.269045 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="proxy-httpd" containerID="cri-o://9b03f9203983dd44ef5a7ad30a99c5c4f3234fd0b6694fe439326b4bfc63c1b7" gracePeriod=30 Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.269212 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="ceilometer-central-agent" containerID="cri-o://23e6bc9c14534d6f845db8d15e20e317e8b796d307edb1dc1563d6aedbeb1c7f" gracePeriod=30 Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.269030 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="sg-core" containerID="cri-o://d13d100eab9cd59b20012e91b029d89372d35a43073cc9fdc21772af64f2a8cc" 
gracePeriod=30 Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.279024 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.219:3000/\": EOF" Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.641791 5029 generic.go:334] "Generic (PLEG): container finished" podID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerID="9b03f9203983dd44ef5a7ad30a99c5c4f3234fd0b6694fe439326b4bfc63c1b7" exitCode=0 Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.641892 5029 generic.go:334] "Generic (PLEG): container finished" podID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerID="d13d100eab9cd59b20012e91b029d89372d35a43073cc9fdc21772af64f2a8cc" exitCode=2 Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.643629 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerDied","Data":"9b03f9203983dd44ef5a7ad30a99c5c4f3234fd0b6694fe439326b4bfc63c1b7"} Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.643696 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerDied","Data":"d13d100eab9cd59b20012e91b029d89372d35a43073cc9fdc21772af64f2a8cc"} Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.646709 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" event={"ID":"0474bc88-da72-4731-85a5-bc2b32263a20","Type":"ContainerStarted","Data":"d8150ecfe1ed35ac0395d3a00b463aa687ab2fe55fe423e61fafbf4f78c68cc3"} Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.649752 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.685400 5029 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.685732 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-log" containerID="cri-o://b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa" gracePeriod=30 Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.685957 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-api" containerID="cri-o://6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a" gracePeriod=30 Mar 13 20:51:58 crc kubenswrapper[5029]: I0313 20:51:58.712875 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" podStartSLOduration=3.712827012 podStartE2EDuration="3.712827012s" podCreationTimestamp="2026-03-13 20:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:51:58.699946272 +0000 UTC m=+1478.716028675" watchObservedRunningTime="2026-03-13 20:51:58.712827012 +0000 UTC m=+1478.728909415" Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.677114 5029 generic.go:334] "Generic (PLEG): container finished" podID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerID="b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa" exitCode=143 Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.677138 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5","Type":"ContainerDied","Data":"b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa"} Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.682770 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerID="fc00b7ad637e894ceaa0c64535b85769625e77d04595ace0d589dffb9a5ad277" exitCode=0 Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.682818 5029 generic.go:334] "Generic (PLEG): container finished" podID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerID="23e6bc9c14534d6f845db8d15e20e317e8b796d307edb1dc1563d6aedbeb1c7f" exitCode=0 Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.682863 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerDied","Data":"fc00b7ad637e894ceaa0c64535b85769625e77d04595ace0d589dffb9a5ad277"} Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.682924 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerDied","Data":"23e6bc9c14534d6f845db8d15e20e317e8b796d307edb1dc1563d6aedbeb1c7f"} Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.787840 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.971681 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.972329 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-ceilometer-tls-certs\") pod \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.972410 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-run-httpd\") pod \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.972960 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc4d0fc2-3316-48d5-af5a-3b62d86519bf" (UID: "dc4d0fc2-3316-48d5-af5a-3b62d86519bf"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.972959 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-scripts\") pod \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.973088 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-config-data\") pod \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.973143 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-sg-core-conf-yaml\") pod \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.973181 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-log-httpd\") pod \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.973241 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82vbc\" (UniqueName: \"kubernetes.io/projected/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-kube-api-access-82vbc\") pod \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.973323 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-combined-ca-bundle\") pod \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\" (UID: \"dc4d0fc2-3316-48d5-af5a-3b62d86519bf\") " Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.973872 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc4d0fc2-3316-48d5-af5a-3b62d86519bf" (UID: "dc4d0fc2-3316-48d5-af5a-3b62d86519bf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.977809 5029 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:51:59 crc kubenswrapper[5029]: I0313 20:51:59.978116 5029 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.000023 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-kube-api-access-82vbc" (OuterVolumeSpecName: "kube-api-access-82vbc") pod "dc4d0fc2-3316-48d5-af5a-3b62d86519bf" (UID: "dc4d0fc2-3316-48d5-af5a-3b62d86519bf"). InnerVolumeSpecName "kube-api-access-82vbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.016665 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc4d0fc2-3316-48d5-af5a-3b62d86519bf" (UID: "dc4d0fc2-3316-48d5-af5a-3b62d86519bf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.033109 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-scripts" (OuterVolumeSpecName: "scripts") pod "dc4d0fc2-3316-48d5-af5a-3b62d86519bf" (UID: "dc4d0fc2-3316-48d5-af5a-3b62d86519bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.077015 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "dc4d0fc2-3316-48d5-af5a-3b62d86519bf" (UID: "dc4d0fc2-3316-48d5-af5a-3b62d86519bf"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.080395 5029 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.080445 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.080458 5029 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.080475 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82vbc\" (UniqueName: \"kubernetes.io/projected/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-kube-api-access-82vbc\") on node \"crc\" DevicePath 
\"\"" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.098882 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc4d0fc2-3316-48d5-af5a-3b62d86519bf" (UID: "dc4d0fc2-3316-48d5-af5a-3b62d86519bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.139105 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-config-data" (OuterVolumeSpecName: "config-data") pod "dc4d0fc2-3316-48d5-af5a-3b62d86519bf" (UID: "dc4d0fc2-3316-48d5-af5a-3b62d86519bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.149508 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557252-qnqbb"] Mar 13 20:52:00 crc kubenswrapper[5029]: E0313 20:52:00.150375 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="proxy-httpd" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.150466 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="proxy-httpd" Mar 13 20:52:00 crc kubenswrapper[5029]: E0313 20:52:00.150566 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="ceilometer-notification-agent" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.150648 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="ceilometer-notification-agent" Mar 13 20:52:00 crc kubenswrapper[5029]: E0313 20:52:00.150731 5029 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="ceilometer-central-agent" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.150801 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="ceilometer-central-agent" Mar 13 20:52:00 crc kubenswrapper[5029]: E0313 20:52:00.150896 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="sg-core" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.150960 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="sg-core" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.151276 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="ceilometer-central-agent" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.151364 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="sg-core" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.151447 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="ceilometer-notification-agent" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.151525 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" containerName="proxy-httpd" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.152415 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557252-qnqbb" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.155421 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.155689 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.156163 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.163205 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557252-qnqbb"] Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.183955 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62f4x\" (UniqueName: \"kubernetes.io/projected/793cd1b3-1bef-48e6-8a58-1a475d06d99f-kube-api-access-62f4x\") pod \"auto-csr-approver-29557252-qnqbb\" (UID: \"793cd1b3-1bef-48e6-8a58-1a475d06d99f\") " pod="openshift-infra/auto-csr-approver-29557252-qnqbb" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.184126 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.184146 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d0fc2-3316-48d5-af5a-3b62d86519bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.286399 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62f4x\" (UniqueName: 
\"kubernetes.io/projected/793cd1b3-1bef-48e6-8a58-1a475d06d99f-kube-api-access-62f4x\") pod \"auto-csr-approver-29557252-qnqbb\" (UID: \"793cd1b3-1bef-48e6-8a58-1a475d06d99f\") " pod="openshift-infra/auto-csr-approver-29557252-qnqbb" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.305999 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62f4x\" (UniqueName: \"kubernetes.io/projected/793cd1b3-1bef-48e6-8a58-1a475d06d99f-kube-api-access-62f4x\") pod \"auto-csr-approver-29557252-qnqbb\" (UID: \"793cd1b3-1bef-48e6-8a58-1a475d06d99f\") " pod="openshift-infra/auto-csr-approver-29557252-qnqbb" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.530286 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557252-qnqbb" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.728692 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc4d0fc2-3316-48d5-af5a-3b62d86519bf","Type":"ContainerDied","Data":"2738a8352ec5eebecd2a54fd5ff6f5a3a78243a65a1cf76a65602c0aae12d613"} Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.728829 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.733039 5029 scope.go:117] "RemoveContainer" containerID="9b03f9203983dd44ef5a7ad30a99c5c4f3234fd0b6694fe439326b4bfc63c1b7" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.775266 5029 scope.go:117] "RemoveContainer" containerID="d13d100eab9cd59b20012e91b029d89372d35a43073cc9fdc21772af64f2a8cc" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.812909 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.836747 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.845232 5029 scope.go:117] "RemoveContainer" containerID="fc00b7ad637e894ceaa0c64535b85769625e77d04595ace0d589dffb9a5ad277" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.867732 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.871434 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.880515 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.880542 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.880955 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.892874 5029 scope.go:117] "RemoveContainer" containerID="23e6bc9c14534d6f845db8d15e20e317e8b796d307edb1dc1563d6aedbeb1c7f" Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.897060 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:00 crc kubenswrapper[5029]: I0313 20:52:00.958617 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:00 crc kubenswrapper[5029]: E0313 20:52:00.960015 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-hph7t log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.006107 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-log-httpd\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.006260 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-run-httpd\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.006313 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.006363 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hph7t\" (UniqueName: \"kubernetes.io/projected/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-kube-api-access-hph7t\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.006458 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.006508 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-config-data\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.006532 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.006566 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-scripts\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.073635 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557252-qnqbb"] Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.109838 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-log-httpd\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.109902 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-run-httpd\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.109938 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.109965 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hph7t\" (UniqueName: \"kubernetes.io/projected/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-kube-api-access-hph7t\") pod \"ceilometer-0\" (UID: 
\"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.109995 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.110045 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-config-data\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.110072 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.110102 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-scripts\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.110436 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-log-httpd\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.111682 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-run-httpd\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.118156 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.118321 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.119675 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-config-data\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.120513 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-scripts\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.120909 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.130793 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hph7t\" (UniqueName: \"kubernetes.io/projected/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-kube-api-access-hph7t\") pod \"ceilometer-0\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.742614 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557252-qnqbb" event={"ID":"793cd1b3-1bef-48e6-8a58-1a475d06d99f","Type":"ContainerStarted","Data":"54c521d0ce8b849392017bee9cbf15ff743b984e5f3fd40fc93d6d4815a62d58"} Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.746184 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.761772 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.934266 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-run-httpd\") pod \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.934716 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" (UID: "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.934808 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-config-data\") pod \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.934871 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-combined-ca-bundle\") pod \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.934918 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hph7t\" (UniqueName: \"kubernetes.io/projected/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-kube-api-access-hph7t\") pod \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.934963 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-sg-core-conf-yaml\") pod \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.935009 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-ceilometer-tls-certs\") pod \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.935104 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-log-httpd\") pod \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.935132 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-scripts\") pod \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\" (UID: \"b0a5eb15-bbd1-48e8-a97c-80e3e043d58f\") " Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.935541 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" (UID: "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.935663 5029 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.935682 5029 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.942103 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" (UID: "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.942258 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-kube-api-access-hph7t" (OuterVolumeSpecName: "kube-api-access-hph7t") pod "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" (UID: "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f"). InnerVolumeSpecName "kube-api-access-hph7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.943624 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" (UID: "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.944261 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-scripts" (OuterVolumeSpecName: "scripts") pod "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" (UID: "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.944829 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-config-data" (OuterVolumeSpecName: "config-data") pod "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" (UID: "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:01 crc kubenswrapper[5029]: I0313 20:52:01.949725 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" (UID: "b0a5eb15-bbd1-48e8-a97c-80e3e043d58f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.039646 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.039685 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.039698 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hph7t\" (UniqueName: \"kubernetes.io/projected/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-kube-api-access-hph7t\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.039712 5029 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.039723 5029 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.039733 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.624536 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4d0fc2-3316-48d5-af5a-3b62d86519bf" path="/var/lib/kubelet/pods/dc4d0fc2-3316-48d5-af5a-3b62d86519bf/volumes" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.755204 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.766000 5029 generic.go:334] "Generic (PLEG): container finished" podID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerID="6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a" exitCode=0 Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.766095 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5","Type":"ContainerDied","Data":"6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a"} Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.766127 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5","Type":"ContainerDied","Data":"e02cfd3ef64976cf64b8de0ff0768ab929686394276cacdb018ba8c6f141db74"} Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.766147 5029 scope.go:117] "RemoveContainer" containerID="6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.766299 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.783301 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.785437 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557252-qnqbb" event={"ID":"793cd1b3-1bef-48e6-8a58-1a475d06d99f","Type":"ContainerStarted","Data":"8af2760983564bb233cc0b2c486069de451daf176e96926ea1b8c6f3542c70e6"} Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.802229 5029 scope.go:117] "RemoveContainer" containerID="b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.822305 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557252-qnqbb" podStartSLOduration=1.982659188 podStartE2EDuration="2.822284225s" podCreationTimestamp="2026-03-13 20:52:00 +0000 UTC" firstStartedPulling="2026-03-13 20:52:01.075254532 +0000 UTC m=+1481.091336935" lastFinishedPulling="2026-03-13 20:52:01.914879579 +0000 UTC m=+1481.930961972" observedRunningTime="2026-03-13 20:52:02.816432216 +0000 UTC m=+1482.832514619" watchObservedRunningTime="2026-03-13 20:52:02.822284225 +0000 UTC m=+1482.838366628" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.846144 5029 scope.go:117] "RemoveContainer" containerID="6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a" Mar 13 20:52:02 crc kubenswrapper[5029]: E0313 20:52:02.849881 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a\": container with ID starting with 6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a not found: ID does not exist" containerID="6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.849927 5029 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a"} err="failed to get container status \"6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a\": rpc error: code = NotFound desc = could not find container \"6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a\": container with ID starting with 6202f11ee907038d606f602404edc93c05689a728d3863a333d8d6a96072e98a not found: ID does not exist" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.849960 5029 scope.go:117] "RemoveContainer" containerID="b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa" Mar 13 20:52:02 crc kubenswrapper[5029]: E0313 20:52:02.854178 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa\": container with ID starting with b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa not found: ID does not exist" containerID="b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.854211 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa"} err="failed to get container status \"b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa\": rpc error: code = NotFound desc = could not find container \"b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa\": container with ID starting with b4fb428de0432d4110df9aeae37b7caefb92f740cb81b9495daa8feebca449aa not found: ID does not exist" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.861004 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlqpd\" (UniqueName: \"kubernetes.io/projected/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-kube-api-access-xlqpd\") pod 
\"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.861261 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-config-data\") pod \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.861321 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-logs\") pod \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.861454 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-combined-ca-bundle\") pod \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\" (UID: \"ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5\") " Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.864934 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-logs" (OuterVolumeSpecName: "logs") pod "ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" (UID: "ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.888311 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-kube-api-access-xlqpd" (OuterVolumeSpecName: "kube-api-access-xlqpd") pod "ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" (UID: "ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5"). InnerVolumeSpecName "kube-api-access-xlqpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.925769 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" (UID: "ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.967507 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.967548 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlqpd\" (UniqueName: \"kubernetes.io/projected/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-kube-api-access-xlqpd\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.967561 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.978941 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.979508 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-config-data" (OuterVolumeSpecName: "config-data") pod "ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" (UID: "ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:02 crc kubenswrapper[5029]: I0313 20:52:02.999746 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.013664 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:03 crc kubenswrapper[5029]: E0313 20:52:03.014215 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-log" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.014240 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-log" Mar 13 20:52:03 crc kubenswrapper[5029]: E0313 20:52:03.014284 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-api" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.014291 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-api" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.014486 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-api" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.014518 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" containerName="nova-api-log" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.017209 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.024985 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.025184 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.026392 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.033979 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.070693 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.117352 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.132939 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.150627 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.152841 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.155586 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.156439 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.158128 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.160170 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.172367 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd727004-62dc-41e3-91b7-0fb181e9a44e-run-httpd\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.172461 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.172519 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgr7w\" (UniqueName: \"kubernetes.io/projected/dd727004-62dc-41e3-91b7-0fb181e9a44e-kube-api-access-fgr7w\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.172577 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd727004-62dc-41e3-91b7-0fb181e9a44e-log-httpd\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.172607 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.172632 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-scripts\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.172660 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-config-data\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.172678 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.274741 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd727004-62dc-41e3-91b7-0fb181e9a44e-run-httpd\") pod \"ceilometer-0\" (UID: 
\"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.274803 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-public-tls-certs\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.274839 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-logs\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.274882 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275029 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275084 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgr7w\" (UniqueName: \"kubernetes.io/projected/dd727004-62dc-41e3-91b7-0fb181e9a44e-kube-api-access-fgr7w\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275155 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5dsl\" (UniqueName: \"kubernetes.io/projected/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-kube-api-access-l5dsl\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275200 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd727004-62dc-41e3-91b7-0fb181e9a44e-log-httpd\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275230 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275255 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-scripts\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275278 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-config-data\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275297 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275302 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd727004-62dc-41e3-91b7-0fb181e9a44e-run-httpd\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275354 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.275379 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-config-data\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.276185 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd727004-62dc-41e3-91b7-0fb181e9a44e-log-httpd\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.280528 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.284331 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-scripts\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.285158 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.292623 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-config-data\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.293072 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd727004-62dc-41e3-91b7-0fb181e9a44e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.314640 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgr7w\" (UniqueName: \"kubernetes.io/projected/dd727004-62dc-41e3-91b7-0fb181e9a44e-kube-api-access-fgr7w\") pod \"ceilometer-0\" (UID: \"dd727004-62dc-41e3-91b7-0fb181e9a44e\") " pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.363544 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.376965 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5dsl\" (UniqueName: \"kubernetes.io/projected/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-kube-api-access-l5dsl\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.377125 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.377158 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-config-data\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.377205 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-public-tls-certs\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.377231 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-logs\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.377255 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.381646 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.381684 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-config-data\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.382046 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-public-tls-certs\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.382154 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.383365 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-logs\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.402226 5029 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l5dsl\" (UniqueName: \"kubernetes.io/projected/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-kube-api-access-l5dsl\") pod \"nova-api-0\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.467414 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.797264 5029 generic.go:334] "Generic (PLEG): container finished" podID="793cd1b3-1bef-48e6-8a58-1a475d06d99f" containerID="8af2760983564bb233cc0b2c486069de451daf176e96926ea1b8c6f3542c70e6" exitCode=0 Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.797755 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557252-qnqbb" event={"ID":"793cd1b3-1bef-48e6-8a58-1a475d06d99f","Type":"ContainerDied","Data":"8af2760983564bb233cc0b2c486069de451daf176e96926ea1b8c6f3542c70e6"} Mar 13 20:52:03 crc kubenswrapper[5029]: I0313 20:52:03.868541 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.007099 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.613514 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5" path="/var/lib/kubelet/pods/ab47deae-b3a8-4cde-9aa1-8c7a92ef1da5/volumes" Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.614827 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a5eb15-bbd1-48e8-a97c-80e3e043d58f" path="/var/lib/kubelet/pods/b0a5eb15-bbd1-48e8-a97c-80e3e043d58f/volumes" Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.810786 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"01865eb3-9fa8-44f8-985c-c25e9e5af7b2","Type":"ContainerStarted","Data":"4ecd809812489fa6d2ba9e0153821a7f4e50c471fd0246c1a65b59775012418c"} Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.810880 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01865eb3-9fa8-44f8-985c-c25e9e5af7b2","Type":"ContainerStarted","Data":"786f5c8c418c19e76dcba4bcebdbe3a09ff7c6bd765b3371107a549006a33c79"} Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.810902 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01865eb3-9fa8-44f8-985c-c25e9e5af7b2","Type":"ContainerStarted","Data":"38434b297dad687c7feb99ad9b694d44c860e24daadc74fbc8ad53f0320fbfe7"} Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.812742 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd727004-62dc-41e3-91b7-0fb181e9a44e","Type":"ContainerStarted","Data":"4d16146f7d32c4ca90f3ef5505c066625dd2b43c7bffa137602a71e556a3cc18"} Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.812767 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd727004-62dc-41e3-91b7-0fb181e9a44e","Type":"ContainerStarted","Data":"c83123321a0efedda2c8c39026cdc0b1a68739964b1f861f5ac23246d95fb353"} Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.843877 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.843833063 podStartE2EDuration="1.843833063s" podCreationTimestamp="2026-03-13 20:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:52:04.832049642 +0000 UTC m=+1484.848132045" watchObservedRunningTime="2026-03-13 20:52:04.843833063 +0000 UTC m=+1484.859915466" Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.911867 5029 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.911909 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 20:52:04 crc kubenswrapper[5029]: I0313 20:52:04.971373 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.015984 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.256665 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557252-qnqbb" Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.328974 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62f4x\" (UniqueName: \"kubernetes.io/projected/793cd1b3-1bef-48e6-8a58-1a475d06d99f-kube-api-access-62f4x\") pod \"793cd1b3-1bef-48e6-8a58-1a475d06d99f\" (UID: \"793cd1b3-1bef-48e6-8a58-1a475d06d99f\") " Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.335713 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793cd1b3-1bef-48e6-8a58-1a475d06d99f-kube-api-access-62f4x" (OuterVolumeSpecName: "kube-api-access-62f4x") pod "793cd1b3-1bef-48e6-8a58-1a475d06d99f" (UID: "793cd1b3-1bef-48e6-8a58-1a475d06d99f"). InnerVolumeSpecName "kube-api-access-62f4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.432444 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62f4x\" (UniqueName: \"kubernetes.io/projected/793cd1b3-1bef-48e6-8a58-1a475d06d99f-kube-api-access-62f4x\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.825626 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd727004-62dc-41e3-91b7-0fb181e9a44e","Type":"ContainerStarted","Data":"7ba4fd454a89f6adda5e4598f13e874075648b556ca670369f4b6f91c5054210"} Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.830301 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557252-qnqbb" Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.833589 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557252-qnqbb" event={"ID":"793cd1b3-1bef-48e6-8a58-1a475d06d99f","Type":"ContainerDied","Data":"54c521d0ce8b849392017bee9cbf15ff743b984e5f3fd40fc93d6d4815a62d58"} Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.833655 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54c521d0ce8b849392017bee9cbf15ff743b984e5f3fd40fc93d6d4815a62d58" Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.867199 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.904736 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557246-hf9hf"] Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.913868 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557246-hf9hf"] Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.925021 5029 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 20:52:05 crc kubenswrapper[5029]: I0313 20:52:05.925481 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.121146 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.229386 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-72rqn"] Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.229628 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" podUID="72c716ac-a862-41c9-be07-07d0df558b07" containerName="dnsmasq-dns" containerID="cri-o://9bfa6410ff244af78bfaa062053c693be7475335d374271b72f5ed6ad5b5175c" gracePeriod=10 Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.321759 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-cgpjg"] Mar 13 20:52:06 crc kubenswrapper[5029]: E0313 20:52:06.322441 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793cd1b3-1bef-48e6-8a58-1a475d06d99f" containerName="oc" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.322467 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="793cd1b3-1bef-48e6-8a58-1a475d06d99f" containerName="oc" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.322768 5029 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="793cd1b3-1bef-48e6-8a58-1a475d06d99f" containerName="oc" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.324890 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.329181 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.329327 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.345210 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-cgpjg"] Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.472245 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-config-data\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.472673 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wvlw\" (UniqueName: \"kubernetes.io/projected/b351b861-896b-4e82-8636-23800ab0c89c-kube-api-access-6wvlw\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.472733 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " 
pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.472750 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-scripts\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.574484 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.574629 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-scripts\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.574807 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-config-data\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.574840 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wvlw\" (UniqueName: \"kubernetes.io/projected/b351b861-896b-4e82-8636-23800ab0c89c-kube-api-access-6wvlw\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 
13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.586920 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-scripts\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.587013 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.620537 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-config-data\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.633840 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wvlw\" (UniqueName: \"kubernetes.io/projected/b351b861-896b-4e82-8636-23800ab0c89c-kube-api-access-6wvlw\") pod \"nova-cell1-cell-mapping-cgpjg\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.660383 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="487af116-ac18-4881-9db5-7c99f89ac667" path="/var/lib/kubelet/pods/487af116-ac18-4881-9db5-7c99f89ac667/volumes" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.770632 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.869892 5029 generic.go:334] "Generic (PLEG): container finished" podID="72c716ac-a862-41c9-be07-07d0df558b07" containerID="9bfa6410ff244af78bfaa062053c693be7475335d374271b72f5ed6ad5b5175c" exitCode=0 Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.870411 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" event={"ID":"72c716ac-a862-41c9-be07-07d0df558b07","Type":"ContainerDied","Data":"9bfa6410ff244af78bfaa062053c693be7475335d374271b72f5ed6ad5b5175c"} Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.883766 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd727004-62dc-41e3-91b7-0fb181e9a44e","Type":"ContainerStarted","Data":"fb67b1bfd573285a35f74996f9d514ada7da986abd1528d0cdb268a1e623dc03"} Mar 13 20:52:06 crc kubenswrapper[5029]: I0313 20:52:06.934758 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.007407 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rznwh\" (UniqueName: \"kubernetes.io/projected/72c716ac-a862-41c9-be07-07d0df558b07-kube-api-access-rznwh\") pod \"72c716ac-a862-41c9-be07-07d0df558b07\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.007540 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-svc\") pod \"72c716ac-a862-41c9-be07-07d0df558b07\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.007651 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-sb\") pod \"72c716ac-a862-41c9-be07-07d0df558b07\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.007823 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-nb\") pod \"72c716ac-a862-41c9-be07-07d0df558b07\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.007878 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-swift-storage-0\") pod \"72c716ac-a862-41c9-be07-07d0df558b07\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.008478 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-config\") pod \"72c716ac-a862-41c9-be07-07d0df558b07\" (UID: \"72c716ac-a862-41c9-be07-07d0df558b07\") " Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.019121 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c716ac-a862-41c9-be07-07d0df558b07-kube-api-access-rznwh" (OuterVolumeSpecName: "kube-api-access-rznwh") pod "72c716ac-a862-41c9-be07-07d0df558b07" (UID: "72c716ac-a862-41c9-be07-07d0df558b07"). InnerVolumeSpecName "kube-api-access-rznwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.103554 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "72c716ac-a862-41c9-be07-07d0df558b07" (UID: "72c716ac-a862-41c9-be07-07d0df558b07"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.109540 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "72c716ac-a862-41c9-be07-07d0df558b07" (UID: "72c716ac-a862-41c9-be07-07d0df558b07"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.116537 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rznwh\" (UniqueName: \"kubernetes.io/projected/72c716ac-a862-41c9-be07-07d0df558b07-kube-api-access-rznwh\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.116562 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.116572 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.165563 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72c716ac-a862-41c9-be07-07d0df558b07" (UID: "72c716ac-a862-41c9-be07-07d0df558b07"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.179608 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "72c716ac-a862-41c9-be07-07d0df558b07" (UID: "72c716ac-a862-41c9-be07-07d0df558b07"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.201384 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-config" (OuterVolumeSpecName: "config") pod "72c716ac-a862-41c9-be07-07d0df558b07" (UID: "72c716ac-a862-41c9-be07-07d0df558b07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.219492 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.219580 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.219591 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72c716ac-a862-41c9-be07-07d0df558b07-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.468472 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-cgpjg"] Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.919425 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd727004-62dc-41e3-91b7-0fb181e9a44e","Type":"ContainerStarted","Data":"c2c121500ec9702eaa47d18e302904f2fef1a4d46a501992b551493268146c14"} Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.920030 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.929133 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-cgpjg" event={"ID":"b351b861-896b-4e82-8636-23800ab0c89c","Type":"ContainerStarted","Data":"3496b61b9e2ef305e95c3360c42df3a18cb712cb78044cc0aa51450fea1e4444"} Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.929206 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cgpjg" event={"ID":"b351b861-896b-4e82-8636-23800ab0c89c","Type":"ContainerStarted","Data":"2aae75ece7be22f8072dcff41295c1fccf0069645a79bf75518a78db96067ddd"} Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.932034 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" event={"ID":"72c716ac-a862-41c9-be07-07d0df558b07","Type":"ContainerDied","Data":"dee6f94d2b13dc7af83e907487e74967dcfd2f0588a406e15939612a22de240e"} Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.932104 5029 scope.go:117] "RemoveContainer" containerID="9bfa6410ff244af78bfaa062053c693be7475335d374271b72f5ed6ad5b5175c" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.932174 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b6c754dc9-72rqn" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.962624 5029 scope.go:117] "RemoveContainer" containerID="58e09f9a565e494930eabdd497fa2dc44718e2e3ff1080dd87942f6a344a3c8b" Mar 13 20:52:07 crc kubenswrapper[5029]: I0313 20:52:07.986604 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.24433263 podStartE2EDuration="5.986578834s" podCreationTimestamp="2026-03-13 20:52:02 +0000 UTC" firstStartedPulling="2026-03-13 20:52:03.862625775 +0000 UTC m=+1483.878708178" lastFinishedPulling="2026-03-13 20:52:07.604871979 +0000 UTC m=+1487.620954382" observedRunningTime="2026-03-13 20:52:07.961973053 +0000 UTC m=+1487.978055476" watchObservedRunningTime="2026-03-13 20:52:07.986578834 +0000 UTC m=+1488.002661237" Mar 13 20:52:08 crc kubenswrapper[5029]: I0313 20:52:08.018528 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-cgpjg" podStartSLOduration=2.018488253 podStartE2EDuration="2.018488253s" podCreationTimestamp="2026-03-13 20:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:52:07.997545763 +0000 UTC m=+1488.013628176" watchObservedRunningTime="2026-03-13 20:52:08.018488253 +0000 UTC m=+1488.034570656" Mar 13 20:52:08 crc kubenswrapper[5029]: I0313 20:52:08.065384 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-72rqn"] Mar 13 20:52:08 crc kubenswrapper[5029]: I0313 20:52:08.077694 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-72rqn"] Mar 13 20:52:08 crc kubenswrapper[5029]: I0313 20:52:08.124628 5029 scope.go:117] "RemoveContainer" containerID="f086fcff288e6521a2e4ee84260004ee2c15f79ab50713a5e54854ece9a46fb2" Mar 13 20:52:08 crc kubenswrapper[5029]: I0313 20:52:08.617192 
5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c716ac-a862-41c9-be07-07d0df558b07" path="/var/lib/kubelet/pods/72c716ac-a862-41c9-be07-07d0df558b07/volumes" Mar 13 20:52:12 crc kubenswrapper[5029]: I0313 20:52:12.911207 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 20:52:12 crc kubenswrapper[5029]: I0313 20:52:12.911778 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 20:52:13 crc kubenswrapper[5029]: I0313 20:52:13.468598 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 20:52:13 crc kubenswrapper[5029]: I0313 20:52:13.468901 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 20:52:13 crc kubenswrapper[5029]: I0313 20:52:13.994787 5029 generic.go:334] "Generic (PLEG): container finished" podID="b351b861-896b-4e82-8636-23800ab0c89c" containerID="3496b61b9e2ef305e95c3360c42df3a18cb712cb78044cc0aa51450fea1e4444" exitCode=0 Mar 13 20:52:13 crc kubenswrapper[5029]: I0313 20:52:13.994841 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cgpjg" event={"ID":"b351b861-896b-4e82-8636-23800ab0c89c","Type":"ContainerDied","Data":"3496b61b9e2ef305e95c3360c42df3a18cb712cb78044cc0aa51450fea1e4444"} Mar 13 20:52:14 crc kubenswrapper[5029]: I0313 20:52:14.485295 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 20:52:14 crc kubenswrapper[5029]: I0313 20:52:14.485295 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-log" 
probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 20:52:14 crc kubenswrapper[5029]: I0313 20:52:14.915730 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 20:52:14 crc kubenswrapper[5029]: I0313 20:52:14.918178 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 20:52:14 crc kubenswrapper[5029]: I0313 20:52:14.933412 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.029731 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.590956 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.672394 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wvlw\" (UniqueName: \"kubernetes.io/projected/b351b861-896b-4e82-8636-23800ab0c89c-kube-api-access-6wvlw\") pod \"b351b861-896b-4e82-8636-23800ab0c89c\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.672599 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-scripts\") pod \"b351b861-896b-4e82-8636-23800ab0c89c\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.672871 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-config-data\") pod 
\"b351b861-896b-4e82-8636-23800ab0c89c\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.672968 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-combined-ca-bundle\") pod \"b351b861-896b-4e82-8636-23800ab0c89c\" (UID: \"b351b861-896b-4e82-8636-23800ab0c89c\") " Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.689113 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b351b861-896b-4e82-8636-23800ab0c89c-kube-api-access-6wvlw" (OuterVolumeSpecName: "kube-api-access-6wvlw") pod "b351b861-896b-4e82-8636-23800ab0c89c" (UID: "b351b861-896b-4e82-8636-23800ab0c89c"). InnerVolumeSpecName "kube-api-access-6wvlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.702114 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-scripts" (OuterVolumeSpecName: "scripts") pod "b351b861-896b-4e82-8636-23800ab0c89c" (UID: "b351b861-896b-4e82-8636-23800ab0c89c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.720145 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-config-data" (OuterVolumeSpecName: "config-data") pod "b351b861-896b-4e82-8636-23800ab0c89c" (UID: "b351b861-896b-4e82-8636-23800ab0c89c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.720924 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b351b861-896b-4e82-8636-23800ab0c89c" (UID: "b351b861-896b-4e82-8636-23800ab0c89c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.779196 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.779388 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.779458 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wvlw\" (UniqueName: \"kubernetes.io/projected/b351b861-896b-4e82-8636-23800ab0c89c-kube-api-access-6wvlw\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:15 crc kubenswrapper[5029]: I0313 20:52:15.779512 5029 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b351b861-896b-4e82-8636-23800ab0c89c-scripts\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:16 crc kubenswrapper[5029]: I0313 20:52:16.021752 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cgpjg" Mar 13 20:52:16 crc kubenswrapper[5029]: I0313 20:52:16.021737 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cgpjg" event={"ID":"b351b861-896b-4e82-8636-23800ab0c89c","Type":"ContainerDied","Data":"2aae75ece7be22f8072dcff41295c1fccf0069645a79bf75518a78db96067ddd"} Mar 13 20:52:16 crc kubenswrapper[5029]: I0313 20:52:16.022751 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aae75ece7be22f8072dcff41295c1fccf0069645a79bf75518a78db96067ddd" Mar 13 20:52:16 crc kubenswrapper[5029]: I0313 20:52:16.126452 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:16 crc kubenswrapper[5029]: I0313 20:52:16.126726 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-log" containerID="cri-o://786f5c8c418c19e76dcba4bcebdbe3a09ff7c6bd765b3371107a549006a33c79" gracePeriod=30 Mar 13 20:52:16 crc kubenswrapper[5029]: I0313 20:52:16.126827 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-api" containerID="cri-o://4ecd809812489fa6d2ba9e0153821a7f4e50c471fd0246c1a65b59775012418c" gracePeriod=30 Mar 13 20:52:16 crc kubenswrapper[5029]: I0313 20:52:16.150916 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:52:16 crc kubenswrapper[5029]: I0313 20:52:16.151191 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cb77ad0e-0a71-465a-a2bf-eb94354aa22e" containerName="nova-scheduler-scheduler" containerID="cri-o://c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19" gracePeriod=30 Mar 13 20:52:16 crc kubenswrapper[5029]: I0313 
20:52:16.187721 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.045312 5029 generic.go:334] "Generic (PLEG): container finished" podID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerID="786f5c8c418c19e76dcba4bcebdbe3a09ff7c6bd765b3371107a549006a33c79" exitCode=143 Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.046648 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01865eb3-9fa8-44f8-985c-c25e9e5af7b2","Type":"ContainerDied","Data":"786f5c8c418c19e76dcba4bcebdbe3a09ff7c6bd765b3371107a549006a33c79"} Mar 13 20:52:17 crc kubenswrapper[5029]: E0313 20:52:17.475459 5029 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19 is running failed: container process not found" containerID="c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 20:52:17 crc kubenswrapper[5029]: E0313 20:52:17.476274 5029 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19 is running failed: container process not found" containerID="c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 20:52:17 crc kubenswrapper[5029]: E0313 20:52:17.476614 5029 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19 is running failed: container process not found" containerID="c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 20:52:17 crc kubenswrapper[5029]: E0313 20:52:17.476660 5029 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cb77ad0e-0a71-465a-a2bf-eb94354aa22e" containerName="nova-scheduler-scheduler" Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.623557 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.725896 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-combined-ca-bundle\") pod \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.725948 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhp9n\" (UniqueName: \"kubernetes.io/projected/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-kube-api-access-qhp9n\") pod \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.726014 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-config-data\") pod \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\" (UID: \"cb77ad0e-0a71-465a-a2bf-eb94354aa22e\") " Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.733033 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-kube-api-access-qhp9n" (OuterVolumeSpecName: 
"kube-api-access-qhp9n") pod "cb77ad0e-0a71-465a-a2bf-eb94354aa22e" (UID: "cb77ad0e-0a71-465a-a2bf-eb94354aa22e"). InnerVolumeSpecName "kube-api-access-qhp9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.760550 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb77ad0e-0a71-465a-a2bf-eb94354aa22e" (UID: "cb77ad0e-0a71-465a-a2bf-eb94354aa22e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.768036 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-config-data" (OuterVolumeSpecName: "config-data") pod "cb77ad0e-0a71-465a-a2bf-eb94354aa22e" (UID: "cb77ad0e-0a71-465a-a2bf-eb94354aa22e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.828350 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.828390 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhp9n\" (UniqueName: \"kubernetes.io/projected/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-kube-api-access-qhp9n\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:17 crc kubenswrapper[5029]: I0313 20:52:17.828404 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb77ad0e-0a71-465a-a2bf-eb94354aa22e-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.056651 5029 generic.go:334] "Generic (PLEG): container finished" podID="cb77ad0e-0a71-465a-a2bf-eb94354aa22e" containerID="c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19" exitCode=0 Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.057116 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.057108 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cb77ad0e-0a71-465a-a2bf-eb94354aa22e","Type":"ContainerDied","Data":"c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19"} Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.057253 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cb77ad0e-0a71-465a-a2bf-eb94354aa22e","Type":"ContainerDied","Data":"1e1d2c900e6d76436d3fd6d503363e9325832ebaa373a08bb606611a3a9ac160"} Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.057286 5029 scope.go:117] "RemoveContainer" containerID="c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.057639 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerName="nova-metadata-log" containerID="cri-o://59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449" gracePeriod=30 Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.057727 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerName="nova-metadata-metadata" containerID="cri-o://9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd" gracePeriod=30 Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.107359 5029 scope.go:117] "RemoveContainer" containerID="c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19" Mar 13 20:52:18 crc kubenswrapper[5029]: E0313 20:52:18.108093 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19\": container with 
ID starting with c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19 not found: ID does not exist" containerID="c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.108131 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19"} err="failed to get container status \"c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19\": rpc error: code = NotFound desc = could not find container \"c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19\": container with ID starting with c5d68eb9dcb18b9a0ef82cedf0937e2279346a89eef7171c9060a880121b8f19 not found: ID does not exist" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.114271 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.162410 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.183342 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:52:18 crc kubenswrapper[5029]: E0313 20:52:18.184027 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c716ac-a862-41c9-be07-07d0df558b07" containerName="init" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.184045 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c716ac-a862-41c9-be07-07d0df558b07" containerName="init" Mar 13 20:52:18 crc kubenswrapper[5029]: E0313 20:52:18.184088 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c716ac-a862-41c9-be07-07d0df558b07" containerName="dnsmasq-dns" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.184102 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c716ac-a862-41c9-be07-07d0df558b07" 
containerName="dnsmasq-dns" Mar 13 20:52:18 crc kubenswrapper[5029]: E0313 20:52:18.184166 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb77ad0e-0a71-465a-a2bf-eb94354aa22e" containerName="nova-scheduler-scheduler" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.184173 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb77ad0e-0a71-465a-a2bf-eb94354aa22e" containerName="nova-scheduler-scheduler" Mar 13 20:52:18 crc kubenswrapper[5029]: E0313 20:52:18.184182 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b351b861-896b-4e82-8636-23800ab0c89c" containerName="nova-manage" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.184187 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b351b861-896b-4e82-8636-23800ab0c89c" containerName="nova-manage" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.184513 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb77ad0e-0a71-465a-a2bf-eb94354aa22e" containerName="nova-scheduler-scheduler" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.184545 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="b351b861-896b-4e82-8636-23800ab0c89c" containerName="nova-manage" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.184560 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c716ac-a862-41c9-be07-07d0df558b07" containerName="dnsmasq-dns" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.185441 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.191376 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.196171 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.252281 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt452\" (UniqueName: \"kubernetes.io/projected/4267cfcc-949c-4fc5-8564-e11f5be38d85-kube-api-access-qt452\") pod \"nova-scheduler-0\" (UID: \"4267cfcc-949c-4fc5-8564-e11f5be38d85\") " pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.252390 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4267cfcc-949c-4fc5-8564-e11f5be38d85-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4267cfcc-949c-4fc5-8564-e11f5be38d85\") " pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.252494 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4267cfcc-949c-4fc5-8564-e11f5be38d85-config-data\") pod \"nova-scheduler-0\" (UID: \"4267cfcc-949c-4fc5-8564-e11f5be38d85\") " pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.354638 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt452\" (UniqueName: \"kubernetes.io/projected/4267cfcc-949c-4fc5-8564-e11f5be38d85-kube-api-access-qt452\") pod \"nova-scheduler-0\" (UID: \"4267cfcc-949c-4fc5-8564-e11f5be38d85\") " pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.354747 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4267cfcc-949c-4fc5-8564-e11f5be38d85-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4267cfcc-949c-4fc5-8564-e11f5be38d85\") " pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.354824 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4267cfcc-949c-4fc5-8564-e11f5be38d85-config-data\") pod \"nova-scheduler-0\" (UID: \"4267cfcc-949c-4fc5-8564-e11f5be38d85\") " pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.361081 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4267cfcc-949c-4fc5-8564-e11f5be38d85-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4267cfcc-949c-4fc5-8564-e11f5be38d85\") " pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.366199 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4267cfcc-949c-4fc5-8564-e11f5be38d85-config-data\") pod \"nova-scheduler-0\" (UID: \"4267cfcc-949c-4fc5-8564-e11f5be38d85\") " pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.376340 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt452\" (UniqueName: \"kubernetes.io/projected/4267cfcc-949c-4fc5-8564-e11f5be38d85-kube-api-access-qt452\") pod \"nova-scheduler-0\" (UID: \"4267cfcc-949c-4fc5-8564-e11f5be38d85\") " pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.533639 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 20:52:18 crc kubenswrapper[5029]: I0313 20:52:18.613524 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb77ad0e-0a71-465a-a2bf-eb94354aa22e" path="/var/lib/kubelet/pods/cb77ad0e-0a71-465a-a2bf-eb94354aa22e/volumes" Mar 13 20:52:19 crc kubenswrapper[5029]: I0313 20:52:19.031673 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 20:52:19 crc kubenswrapper[5029]: I0313 20:52:19.071643 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4267cfcc-949c-4fc5-8564-e11f5be38d85","Type":"ContainerStarted","Data":"afed4b343e39e2b6d55172c0f0605e0b627b2b3655513a03c01fee982121e065"} Mar 13 20:52:19 crc kubenswrapper[5029]: I0313 20:52:19.075018 5029 generic.go:334] "Generic (PLEG): container finished" podID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerID="59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449" exitCode=143 Mar 13 20:52:19 crc kubenswrapper[5029]: I0313 20:52:19.075076 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d","Type":"ContainerDied","Data":"59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449"} Mar 13 20:52:20 crc kubenswrapper[5029]: I0313 20:52:20.087530 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4267cfcc-949c-4fc5-8564-e11f5be38d85","Type":"ContainerStarted","Data":"c4f26083e767f7a7dbc2d843cfdeac280a2a946ad9a0556164acec811e9d3242"} Mar 13 20:52:20 crc kubenswrapper[5029]: I0313 20:52:20.102894 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.102873059 podStartE2EDuration="2.102873059s" podCreationTimestamp="2026-03-13 20:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:52:20.101673296 +0000 UTC m=+1500.117755699" watchObservedRunningTime="2026-03-13 20:52:20.102873059 +0000 UTC m=+1500.118955462" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.104220 5029 generic.go:334] "Generic (PLEG): container finished" podID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerID="4ecd809812489fa6d2ba9e0153821a7f4e50c471fd0246c1a65b59775012418c" exitCode=0 Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.104295 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01865eb3-9fa8-44f8-985c-c25e9e5af7b2","Type":"ContainerDied","Data":"4ecd809812489fa6d2ba9e0153821a7f4e50c471fd0246c1a65b59775012418c"} Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.104346 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01865eb3-9fa8-44f8-985c-c25e9e5af7b2","Type":"ContainerDied","Data":"38434b297dad687c7feb99ad9b694d44c860e24daadc74fbc8ad53f0320fbfe7"} Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.104358 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38434b297dad687c7feb99ad9b694d44c860e24daadc74fbc8ad53f0320fbfe7" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.176911 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.221266 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-config-data\") pod \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.221351 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-public-tls-certs\") pod \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.221409 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-internal-tls-certs\") pod \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.221975 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5dsl\" (UniqueName: \"kubernetes.io/projected/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-kube-api-access-l5dsl\") pod \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.222098 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-logs\") pod \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.222130 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-combined-ca-bundle\") pod \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\" (UID: \"01865eb3-9fa8-44f8-985c-c25e9e5af7b2\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.226529 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-logs" (OuterVolumeSpecName: "logs") pod "01865eb3-9fa8-44f8-985c-c25e9e5af7b2" (UID: "01865eb3-9fa8-44f8-985c-c25e9e5af7b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.238105 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-kube-api-access-l5dsl" (OuterVolumeSpecName: "kube-api-access-l5dsl") pod "01865eb3-9fa8-44f8-985c-c25e9e5af7b2" (UID: "01865eb3-9fa8-44f8-985c-c25e9e5af7b2"). InnerVolumeSpecName "kube-api-access-l5dsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.261171 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-config-data" (OuterVolumeSpecName: "config-data") pod "01865eb3-9fa8-44f8-985c-c25e9e5af7b2" (UID: "01865eb3-9fa8-44f8-985c-c25e9e5af7b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.263029 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01865eb3-9fa8-44f8-985c-c25e9e5af7b2" (UID: "01865eb3-9fa8-44f8-985c-c25e9e5af7b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.290184 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "01865eb3-9fa8-44f8-985c-c25e9e5af7b2" (UID: "01865eb3-9fa8-44f8-985c-c25e9e5af7b2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.314866 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "01865eb3-9fa8-44f8-985c-c25e9e5af7b2" (UID: "01865eb3-9fa8-44f8-985c-c25e9e5af7b2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.325235 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.325269 5029 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.325281 5029 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.325290 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5dsl\" (UniqueName: \"kubernetes.io/projected/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-kube-api-access-l5dsl\") on 
node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.325301 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.325309 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01865eb3-9fa8-44f8-985c-c25e9e5af7b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.569374 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.630277 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nxzn\" (UniqueName: \"kubernetes.io/projected/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-kube-api-access-9nxzn\") pod \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.630448 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-nova-metadata-tls-certs\") pod \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.630525 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-config-data\") pod \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.630563 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-logs\") pod \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.630662 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-combined-ca-bundle\") pod \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\" (UID: \"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d\") " Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.631743 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-logs" (OuterVolumeSpecName: "logs") pod "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" (UID: "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.635613 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-kube-api-access-9nxzn" (OuterVolumeSpecName: "kube-api-access-9nxzn") pod "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" (UID: "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d"). InnerVolumeSpecName "kube-api-access-9nxzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.668998 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-config-data" (OuterVolumeSpecName: "config-data") pod "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" (UID: "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.687240 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" (UID: "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.698606 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" (UID: "a313bf88-42e1-4ce1-98e4-2b5fab75ec6d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.734061 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.734113 5029 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-logs\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.734126 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.734139 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nxzn\" (UniqueName: \"kubernetes.io/projected/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-kube-api-access-9nxzn\") on 
node \"crc\" DevicePath \"\"" Mar 13 20:52:21 crc kubenswrapper[5029]: I0313 20:52:21.734150 5029 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.116453 5029 generic.go:334] "Generic (PLEG): container finished" podID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerID="9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd" exitCode=0 Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.116524 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.117236 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.116549 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d","Type":"ContainerDied","Data":"9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd"} Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.117349 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a313bf88-42e1-4ce1-98e4-2b5fab75ec6d","Type":"ContainerDied","Data":"0efca41f12e7905b7ab2f585b9772698ce50d3f72d5bed45cd5bc00c4ae537eb"} Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.117381 5029 scope.go:117] "RemoveContainer" containerID="9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.145412 5029 scope.go:117] "RemoveContainer" containerID="59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.160767 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.175337 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.199266 5029 scope.go:117] "RemoveContainer" containerID="9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.199385 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:22 crc kubenswrapper[5029]: E0313 20:52:22.199926 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd\": container with ID starting with 9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd not found: ID does not exist" containerID="9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.199966 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd"} err="failed to get container status \"9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd\": rpc error: code = NotFound desc = could not find container \"9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd\": container with ID starting with 9bb5da141f8b225de38acc02c37738b3514b92bf8f388e409e5c5c8c2f7bf3cd not found: ID does not exist" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.199997 5029 scope.go:117] "RemoveContainer" containerID="59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449" Mar 13 20:52:22 crc kubenswrapper[5029]: E0313 20:52:22.200656 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449\": container with ID starting with 59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449 not found: ID does not exist" containerID="59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.200684 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449"} err="failed to get container status \"59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449\": rpc error: code = NotFound desc = could not find container \"59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449\": container with ID starting with 59a7d1089b1b1a096a3741b05fa8019aa7d68a10f6ca4c16ea01079088431449 not found: ID does not exist" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.225057 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:52:22 crc kubenswrapper[5029]: E0313 20:52:22.225891 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerName="nova-metadata-log" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.225914 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerName="nova-metadata-log" Mar 13 20:52:22 crc kubenswrapper[5029]: E0313 20:52:22.225934 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-api" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.225963 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-api" Mar 13 20:52:22 crc kubenswrapper[5029]: E0313 20:52:22.225982 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" 
containerName="nova-metadata-metadata" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.225989 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerName="nova-metadata-metadata" Mar 13 20:52:22 crc kubenswrapper[5029]: E0313 20:52:22.225999 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-log" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.226006 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-log" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.226305 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerName="nova-metadata-metadata" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.226330 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" containerName="nova-metadata-log" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.226361 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-log" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.226369 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" containerName="nova-api-api" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.227897 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.231501 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.231614 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.243871 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.256831 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.266260 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.268423 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.271137 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.271405 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.271422 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.281242 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.350356 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") 
" pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.350702 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.350835 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9sl\" (UniqueName: \"kubernetes.io/projected/8407884a-22ef-4825-86e3-829a7235545f-kube-api-access-mx9sl\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.351105 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.351217 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-public-tls-certs\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.351329 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-295r6\" (UniqueName: \"kubernetes.io/projected/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-kube-api-access-295r6\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 
20:52:22.351418 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8407884a-22ef-4825-86e3-829a7235545f-logs\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.351586 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-config-data\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.351692 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.351788 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-logs\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.351917 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-config-data\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.453748 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.453839 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx9sl\" (UniqueName: \"kubernetes.io/projected/8407884a-22ef-4825-86e3-829a7235545f-kube-api-access-mx9sl\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.453901 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.453938 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-public-tls-certs\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.453979 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-295r6\" (UniqueName: \"kubernetes.io/projected/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-kube-api-access-295r6\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.454006 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8407884a-22ef-4825-86e3-829a7235545f-logs\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc 
kubenswrapper[5029]: I0313 20:52:22.454081 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-config-data\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.454101 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.454127 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-logs\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.454169 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-config-data\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.454204 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.454546 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8407884a-22ef-4825-86e3-829a7235545f-logs\") pod 
\"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.454934 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-logs\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.459415 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.459622 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.460238 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.460452 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-public-tls-certs\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.464135 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-config-data\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.464789 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-config-data\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.473012 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8407884a-22ef-4825-86e3-829a7235545f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.474295 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-295r6\" (UniqueName: \"kubernetes.io/projected/fa10f96c-8f94-48e8-8eb3-e0d7692e470e-kube-api-access-295r6\") pod \"nova-metadata-0\" (UID: \"fa10f96c-8f94-48e8-8eb3-e0d7692e470e\") " pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.477683 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx9sl\" (UniqueName: \"kubernetes.io/projected/8407884a-22ef-4825-86e3-829a7235545f-kube-api-access-mx9sl\") pod \"nova-api-0\" (UID: \"8407884a-22ef-4825-86e3-829a7235545f\") " pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.557889 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.587819 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.612601 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01865eb3-9fa8-44f8-985c-c25e9e5af7b2" path="/var/lib/kubelet/pods/01865eb3-9fa8-44f8-985c-c25e9e5af7b2/volumes" Mar 13 20:52:22 crc kubenswrapper[5029]: I0313 20:52:22.613751 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a313bf88-42e1-4ce1-98e4-2b5fab75ec6d" path="/var/lib/kubelet/pods/a313bf88-42e1-4ce1-98e4-2b5fab75ec6d/volumes" Mar 13 20:52:23 crc kubenswrapper[5029]: I0313 20:52:23.133287 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 20:52:23 crc kubenswrapper[5029]: I0313 20:52:23.167419 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 20:52:23 crc kubenswrapper[5029]: I0313 20:52:23.534497 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 13 20:52:24 crc kubenswrapper[5029]: I0313 20:52:24.149412 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8407884a-22ef-4825-86e3-829a7235545f","Type":"ContainerStarted","Data":"018dc2b462f70a71244f39652eb5596d80664cc822ce7d167972d9cf8a5bc607"} Mar 13 20:52:24 crc kubenswrapper[5029]: I0313 20:52:24.150030 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8407884a-22ef-4825-86e3-829a7235545f","Type":"ContainerStarted","Data":"fe3d8892e34203d085bfc10d38e850d071eec550d6e2f891147006dff986640c"} Mar 13 20:52:24 crc kubenswrapper[5029]: I0313 20:52:24.150071 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8407884a-22ef-4825-86e3-829a7235545f","Type":"ContainerStarted","Data":"f74beb187bded11974f6cc43c8f7d52bfc2d5a8ea0160b31237313070d635506"} Mar 13 20:52:24 crc kubenswrapper[5029]: I0313 20:52:24.155012 5029 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fa10f96c-8f94-48e8-8eb3-e0d7692e470e","Type":"ContainerStarted","Data":"032b4adcbe6c2c90991cce3f27b960c2776f9dcb575342d4fdcc3e9a6b2c5595"} Mar 13 20:52:24 crc kubenswrapper[5029]: I0313 20:52:24.155078 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fa10f96c-8f94-48e8-8eb3-e0d7692e470e","Type":"ContainerStarted","Data":"71625a66778da1470d8c089c5c181935c9d69037405fafc183d3d2e4c990b152"} Mar 13 20:52:24 crc kubenswrapper[5029]: I0313 20:52:24.155094 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fa10f96c-8f94-48e8-8eb3-e0d7692e470e","Type":"ContainerStarted","Data":"d4a72922afa96922dfe9553caf9c3ec2a0fd8d09dc7c838243a22992559750be"} Mar 13 20:52:24 crc kubenswrapper[5029]: I0313 20:52:24.186836 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.186817597 podStartE2EDuration="2.186817597s" podCreationTimestamp="2026-03-13 20:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:52:24.179834816 +0000 UTC m=+1504.195917239" watchObservedRunningTime="2026-03-13 20:52:24.186817597 +0000 UTC m=+1504.202900000" Mar 13 20:52:24 crc kubenswrapper[5029]: I0313 20:52:24.213021 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.21299395 podStartE2EDuration="2.21299395s" podCreationTimestamp="2026-03-13 20:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:52:24.211587522 +0000 UTC m=+1504.227669925" watchObservedRunningTime="2026-03-13 20:52:24.21299395 +0000 UTC m=+1504.229076353" Mar 13 20:52:28 crc kubenswrapper[5029]: I0313 20:52:28.534656 5029 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 13 20:52:28 crc kubenswrapper[5029]: I0313 20:52:28.571966 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 13 20:52:29 crc kubenswrapper[5029]: I0313 20:52:29.244296 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 13 20:52:31 crc kubenswrapper[5029]: I0313 20:52:31.950230 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:52:31 crc kubenswrapper[5029]: I0313 20:52:31.951180 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:52:32 crc kubenswrapper[5029]: I0313 20:52:32.558488 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 20:52:32 crc kubenswrapper[5029]: I0313 20:52:32.558555 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 20:52:32 crc kubenswrapper[5029]: I0313 20:52:32.588421 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 20:52:32 crc kubenswrapper[5029]: I0313 20:52:32.588471 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 20:52:33 crc kubenswrapper[5029]: I0313 20:52:33.383765 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Mar 13 20:52:33 crc kubenswrapper[5029]: I0313 20:52:33.574514 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fa10f96c-8f94-48e8-8eb3-e0d7692e470e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 20:52:33 crc kubenswrapper[5029]: I0313 20:52:33.574478 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fa10f96c-8f94-48e8-8eb3-e0d7692e470e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 20:52:33 crc kubenswrapper[5029]: I0313 20:52:33.605993 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8407884a-22ef-4825-86e3-829a7235545f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 20:52:33 crc kubenswrapper[5029]: I0313 20:52:33.606815 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8407884a-22ef-4825-86e3-829a7235545f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.303461 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vk875"] Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.306222 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.321194 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk875"] Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.398658 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-utilities\") pod \"redhat-operators-vk875\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.398879 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-catalog-content\") pod \"redhat-operators-vk875\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.399163 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nrj\" (UniqueName: \"kubernetes.io/projected/7f4af81c-9a89-400f-aaa5-fac0c444aacf-kube-api-access-67nrj\") pod \"redhat-operators-vk875\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.501167 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67nrj\" (UniqueName: \"kubernetes.io/projected/7f4af81c-9a89-400f-aaa5-fac0c444aacf-kube-api-access-67nrj\") pod \"redhat-operators-vk875\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.501335 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-utilities\") pod \"redhat-operators-vk875\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.501401 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-catalog-content\") pod \"redhat-operators-vk875\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.501896 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-utilities\") pod \"redhat-operators-vk875\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.502014 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-catalog-content\") pod \"redhat-operators-vk875\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.525367 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67nrj\" (UniqueName: \"kubernetes.io/projected/7f4af81c-9a89-400f-aaa5-fac0c444aacf-kube-api-access-67nrj\") pod \"redhat-operators-vk875\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:36 crc kubenswrapper[5029]: I0313 20:52:36.664527 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:37 crc kubenswrapper[5029]: I0313 20:52:37.171491 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk875"] Mar 13 20:52:37 crc kubenswrapper[5029]: W0313 20:52:37.172447 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f4af81c_9a89_400f_aaa5_fac0c444aacf.slice/crio-7f73521ff90bdc830a7ca2ea9e40e6bf1a6c14a89f35a08e076f60e3035961bf WatchSource:0}: Error finding container 7f73521ff90bdc830a7ca2ea9e40e6bf1a6c14a89f35a08e076f60e3035961bf: Status 404 returned error can't find the container with id 7f73521ff90bdc830a7ca2ea9e40e6bf1a6c14a89f35a08e076f60e3035961bf Mar 13 20:52:37 crc kubenswrapper[5029]: I0313 20:52:37.289833 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk875" event={"ID":"7f4af81c-9a89-400f-aaa5-fac0c444aacf","Type":"ContainerStarted","Data":"7f73521ff90bdc830a7ca2ea9e40e6bf1a6c14a89f35a08e076f60e3035961bf"} Mar 13 20:52:38 crc kubenswrapper[5029]: I0313 20:52:38.308664 5029 generic.go:334] "Generic (PLEG): container finished" podID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerID="b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857" exitCode=0 Mar 13 20:52:38 crc kubenswrapper[5029]: I0313 20:52:38.308738 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk875" event={"ID":"7f4af81c-9a89-400f-aaa5-fac0c444aacf","Type":"ContainerDied","Data":"b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857"} Mar 13 20:52:40 crc kubenswrapper[5029]: I0313 20:52:40.331715 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk875" 
event={"ID":"7f4af81c-9a89-400f-aaa5-fac0c444aacf","Type":"ContainerStarted","Data":"8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d"} Mar 13 20:52:40 crc kubenswrapper[5029]: I0313 20:52:40.558889 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 20:52:40 crc kubenswrapper[5029]: I0313 20:52:40.559217 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 20:52:40 crc kubenswrapper[5029]: I0313 20:52:40.588823 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 20:52:40 crc kubenswrapper[5029]: I0313 20:52:40.589911 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 20:52:42 crc kubenswrapper[5029]: I0313 20:52:42.357172 5029 generic.go:334] "Generic (PLEG): container finished" podID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerID="8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d" exitCode=0 Mar 13 20:52:42 crc kubenswrapper[5029]: I0313 20:52:42.357238 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk875" event={"ID":"7f4af81c-9a89-400f-aaa5-fac0c444aacf","Type":"ContainerDied","Data":"8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d"} Mar 13 20:52:42 crc kubenswrapper[5029]: I0313 20:52:42.566004 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 20:52:42 crc kubenswrapper[5029]: I0313 20:52:42.566598 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 20:52:42 crc kubenswrapper[5029]: I0313 20:52:42.571530 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 20:52:42 crc kubenswrapper[5029]: I0313 20:52:42.572534 5029 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 20:52:42 crc kubenswrapper[5029]: I0313 20:52:42.595402 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 20:52:42 crc kubenswrapper[5029]: I0313 20:52:42.596409 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 20:52:42 crc kubenswrapper[5029]: I0313 20:52:42.613087 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 20:52:43 crc kubenswrapper[5029]: I0313 20:52:43.377243 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk875" event={"ID":"7f4af81c-9a89-400f-aaa5-fac0c444aacf","Type":"ContainerStarted","Data":"ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146"} Mar 13 20:52:43 crc kubenswrapper[5029]: I0313 20:52:43.385974 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 20:52:43 crc kubenswrapper[5029]: I0313 20:52:43.403747 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vk875" podStartSLOduration=2.9820274490000003 podStartE2EDuration="7.403720075s" podCreationTimestamp="2026-03-13 20:52:36 +0000 UTC" firstStartedPulling="2026-03-13 20:52:38.312641053 +0000 UTC m=+1518.328723456" lastFinishedPulling="2026-03-13 20:52:42.734333679 +0000 UTC m=+1522.750416082" observedRunningTime="2026-03-13 20:52:43.399020268 +0000 UTC m=+1523.415102681" watchObservedRunningTime="2026-03-13 20:52:43.403720075 +0000 UTC m=+1523.419802478" Mar 13 20:52:46 crc kubenswrapper[5029]: I0313 20:52:46.665573 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:46 crc kubenswrapper[5029]: I0313 20:52:46.666314 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:47 crc kubenswrapper[5029]: I0313 20:52:47.715918 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vk875" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerName="registry-server" probeResult="failure" output=< Mar 13 20:52:47 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s Mar 13 20:52:47 crc kubenswrapper[5029]: > Mar 13 20:52:52 crc kubenswrapper[5029]: I0313 20:52:52.346870 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 20:52:53 crc kubenswrapper[5029]: I0313 20:52:53.606470 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 20:52:56 crc kubenswrapper[5029]: I0313 20:52:56.720310 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:56 crc kubenswrapper[5029]: I0313 20:52:56.777868 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:56 crc kubenswrapper[5029]: I0313 20:52:56.974520 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vk875"] Mar 13 20:52:58 crc kubenswrapper[5029]: I0313 20:52:58.063665 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" containerName="rabbitmq" containerID="cri-o://d260845f68e5a0c459e71dd0e85ae681987421511bd7bc9fb3a2e532dad36481" gracePeriod=604795 Mar 13 20:52:58 crc kubenswrapper[5029]: I0313 20:52:58.253836 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="016118a1-8825-4373-a487-2fa17c45488a" containerName="rabbitmq" 
containerID="cri-o://e63b2d48cb795baddd2da68a783532d53775c5d80b3c7700de4f8fdde679face" gracePeriod=604796 Mar 13 20:52:58 crc kubenswrapper[5029]: I0313 20:52:58.530661 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vk875" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerName="registry-server" containerID="cri-o://ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146" gracePeriod=2 Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.038731 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.148114 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67nrj\" (UniqueName: \"kubernetes.io/projected/7f4af81c-9a89-400f-aaa5-fac0c444aacf-kube-api-access-67nrj\") pod \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.148267 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-catalog-content\") pod \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.148360 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-utilities\") pod \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\" (UID: \"7f4af81c-9a89-400f-aaa5-fac0c444aacf\") " Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.149716 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-utilities" (OuterVolumeSpecName: "utilities") pod 
"7f4af81c-9a89-400f-aaa5-fac0c444aacf" (UID: "7f4af81c-9a89-400f-aaa5-fac0c444aacf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.159426 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4af81c-9a89-400f-aaa5-fac0c444aacf-kube-api-access-67nrj" (OuterVolumeSpecName: "kube-api-access-67nrj") pod "7f4af81c-9a89-400f-aaa5-fac0c444aacf" (UID: "7f4af81c-9a89-400f-aaa5-fac0c444aacf"). InnerVolumeSpecName "kube-api-access-67nrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.251140 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67nrj\" (UniqueName: \"kubernetes.io/projected/7f4af81c-9a89-400f-aaa5-fac0c444aacf-kube-api-access-67nrj\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.251468 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.286449 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f4af81c-9a89-400f-aaa5-fac0c444aacf" (UID: "7f4af81c-9a89-400f-aaa5-fac0c444aacf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.354660 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f4af81c-9a89-400f-aaa5-fac0c444aacf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.543167 5029 generic.go:334] "Generic (PLEG): container finished" podID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerID="ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146" exitCode=0 Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.543239 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk875" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.543637 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk875" event={"ID":"7f4af81c-9a89-400f-aaa5-fac0c444aacf","Type":"ContainerDied","Data":"ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146"} Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.543772 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk875" event={"ID":"7f4af81c-9a89-400f-aaa5-fac0c444aacf","Type":"ContainerDied","Data":"7f73521ff90bdc830a7ca2ea9e40e6bf1a6c14a89f35a08e076f60e3035961bf"} Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.543880 5029 scope.go:117] "RemoveContainer" containerID="ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.568723 5029 scope.go:117] "RemoveContainer" containerID="8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.593218 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vk875"] Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 
20:52:59.597942 5029 scope.go:117] "RemoveContainer" containerID="b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.611196 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vk875"] Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.661891 5029 scope.go:117] "RemoveContainer" containerID="ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146" Mar 13 20:52:59 crc kubenswrapper[5029]: E0313 20:52:59.662443 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146\": container with ID starting with ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146 not found: ID does not exist" containerID="ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.662492 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146"} err="failed to get container status \"ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146\": rpc error: code = NotFound desc = could not find container \"ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146\": container with ID starting with ae496ab73abc5d25cb3ff55a44913b0e03cbe0eefd67afafb478b6cb81563146 not found: ID does not exist" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.662528 5029 scope.go:117] "RemoveContainer" containerID="8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d" Mar 13 20:52:59 crc kubenswrapper[5029]: E0313 20:52:59.662783 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d\": container with ID 
starting with 8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d not found: ID does not exist" containerID="8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.662818 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d"} err="failed to get container status \"8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d\": rpc error: code = NotFound desc = could not find container \"8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d\": container with ID starting with 8b55d85018d5bff8530785d9887e0a7ae240485a898bf18f99904fc9cce1fb3d not found: ID does not exist" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.662837 5029 scope.go:117] "RemoveContainer" containerID="b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857" Mar 13 20:52:59 crc kubenswrapper[5029]: E0313 20:52:59.663091 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857\": container with ID starting with b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857 not found: ID does not exist" containerID="b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857" Mar 13 20:52:59 crc kubenswrapper[5029]: I0313 20:52:59.663115 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857"} err="failed to get container status \"b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857\": rpc error: code = NotFound desc = could not find container \"b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857\": container with ID starting with b4553f6f25d03d34f2171da062707784a1a71d063809c9e679a50c2715ce6857 not found: 
ID does not exist" Mar 13 20:53:00 crc kubenswrapper[5029]: I0313 20:53:00.612553 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" path="/var/lib/kubelet/pods/7f4af81c-9a89-400f-aaa5-fac0c444aacf/volumes" Mar 13 20:53:01 crc kubenswrapper[5029]: I0313 20:53:01.950579 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:53:01 crc kubenswrapper[5029]: I0313 20:53:01.951835 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.601952 5029 generic.go:334] "Generic (PLEG): container finished" podID="016118a1-8825-4373-a487-2fa17c45488a" containerID="e63b2d48cb795baddd2da68a783532d53775c5d80b3c7700de4f8fdde679face" exitCode=0 Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.606349 5029 generic.go:334] "Generic (PLEG): container finished" podID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" containerID="d260845f68e5a0c459e71dd0e85ae681987421511bd7bc9fb3a2e532dad36481" exitCode=0 Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.616000 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"016118a1-8825-4373-a487-2fa17c45488a","Type":"ContainerDied","Data":"e63b2d48cb795baddd2da68a783532d53775c5d80b3c7700de4f8fdde679face"} Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.616046 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"7ff0edef-42cf-4ba2-b170-87cfdd6deefb","Type":"ContainerDied","Data":"d260845f68e5a0c459e71dd0e85ae681987421511bd7bc9fb3a2e532dad36481"} Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.705350 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.788913 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-server-conf\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.788986 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-erlang-cookie-secret\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.789012 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-erlang-cookie\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.789090 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-confd\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.789160 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-plugins-conf\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.789211 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.789366 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-tls\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.789392 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-pod-info\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.789410 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt9fc\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-kube-api-access-jt9fc\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.789428 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-plugins\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.789498 5029 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-config-data\") pod \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\" (UID: \"7ff0edef-42cf-4ba2-b170-87cfdd6deefb\") " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.791807 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.798948 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.800631 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.800726 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.800923 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-pod-info" (OuterVolumeSpecName: "pod-info") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.803397 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.805389 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-kube-api-access-jt9fc" (OuterVolumeSpecName: "kube-api-access-jt9fc") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "kube-api-access-jt9fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.810870 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.869461 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-config-data" (OuterVolumeSpecName: "config-data") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.885131 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.893758 5029 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.893813 5029 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.893826 5029 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.894434 5029 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.894459 5029 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-tls\") on node \"crc\" 
DevicePath \"\"" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.894469 5029 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-pod-info\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.894480 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt9fc\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-kube-api-access-jt9fc\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.894489 5029 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.894498 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:04 crc kubenswrapper[5029]: I0313 20:53:04.906125 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-server-conf" (OuterVolumeSpecName: "server-conf") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.005631 5029 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.002823 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmsvn\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-kube-api-access-kmsvn\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.013616 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-config-data\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.013877 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/016118a1-8825-4373-a487-2fa17c45488a-pod-info\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.014098 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-plugins-conf\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.014204 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-confd\") pod 
\"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.030897 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-kube-api-access-kmsvn" (OuterVolumeSpecName: "kube-api-access-kmsvn") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "kube-api-access-kmsvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.046790 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-server-conf\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.046967 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-tls\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.047011 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-plugins\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.047202 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-erlang-cookie\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc 
kubenswrapper[5029]: I0313 20:53:05.047293 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.047340 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/016118a1-8825-4373-a487-2fa17c45488a-erlang-cookie-secret\") pod \"016118a1-8825-4373-a487-2fa17c45488a\" (UID: \"016118a1-8825-4373-a487-2fa17c45488a\") " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.048983 5029 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.049041 5029 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-server-conf\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.049058 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmsvn\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-kube-api-access-kmsvn\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.034249 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/016118a1-8825-4373-a487-2fa17c45488a-pod-info" (OuterVolumeSpecName: "pod-info") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.049387 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.050002 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.050112 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.076649 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.076882 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016118a1-8825-4373-a487-2fa17c45488a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.091341 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7ff0edef-42cf-4ba2-b170-87cfdd6deefb" (UID: "7ff0edef-42cf-4ba2-b170-87cfdd6deefb"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.092403 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.107225 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-config-data" (OuterVolumeSpecName: "config-data") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.149761 5029 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.149821 5029 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.149836 5029 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/016118a1-8825-4373-a487-2fa17c45488a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.149935 5029 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ff0edef-42cf-4ba2-b170-87cfdd6deefb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.149948 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.149957 5029 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/016118a1-8825-4373-a487-2fa17c45488a-pod-info\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.149965 5029 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.149974 5029 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.149982 5029 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.170826 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-server-conf" (OuterVolumeSpecName: "server-conf") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.172013 5029 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.200026 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "016118a1-8825-4373-a487-2fa17c45488a" (UID: "016118a1-8825-4373-a487-2fa17c45488a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.252543 5029 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/016118a1-8825-4373-a487-2fa17c45488a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.252618 5029 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/016118a1-8825-4373-a487-2fa17c45488a-server-conf\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.252636 5029 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.621285 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"016118a1-8825-4373-a487-2fa17c45488a","Type":"ContainerDied","Data":"784fb70e9547df5c0a14e75fb27176a7997c788eaca25a0d0009cbd82238d0be"} Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.621376 5029 scope.go:117] "RemoveContainer" containerID="e63b2d48cb795baddd2da68a783532d53775c5d80b3c7700de4f8fdde679face" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.621384 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.626998 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ff0edef-42cf-4ba2-b170-87cfdd6deefb","Type":"ContainerDied","Data":"47f73fff5aa67dbf2db4fa386878b798427d8b02884e7bfdb049f2d525bc9c0a"} Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.627590 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.648047 5029 scope.go:117] "RemoveContainer" containerID="69b8d86fa5c0171e8ea41bc86941b1160f2a6de1cd11c89e37ba71b2ab3e9d1b" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.682558 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.743086 5029 scope.go:117] "RemoveContainer" containerID="d260845f68e5a0c459e71dd0e85ae681987421511bd7bc9fb3a2e532dad36481" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.746732 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.769590 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.785149 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.799794 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 20:53:05 crc kubenswrapper[5029]: E0313 20:53:05.800726 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerName="registry-server" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.800751 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerName="registry-server" Mar 13 20:53:05 crc kubenswrapper[5029]: E0313 20:53:05.800794 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" containerName="setup-container" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.800802 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" containerName="setup-container" Mar 13 
20:53:05 crc kubenswrapper[5029]: E0313 20:53:05.800813 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016118a1-8825-4373-a487-2fa17c45488a" containerName="setup-container" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.800820 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="016118a1-8825-4373-a487-2fa17c45488a" containerName="setup-container" Mar 13 20:53:05 crc kubenswrapper[5029]: E0313 20:53:05.800832 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016118a1-8825-4373-a487-2fa17c45488a" containerName="rabbitmq" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.800839 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="016118a1-8825-4373-a487-2fa17c45488a" containerName="rabbitmq" Mar 13 20:53:05 crc kubenswrapper[5029]: E0313 20:53:05.800864 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerName="extract-utilities" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.800875 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerName="extract-utilities" Mar 13 20:53:05 crc kubenswrapper[5029]: E0313 20:53:05.800894 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerName="extract-content" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.800901 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerName="extract-content" Mar 13 20:53:05 crc kubenswrapper[5029]: E0313 20:53:05.800918 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" containerName="rabbitmq" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.800924 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" containerName="rabbitmq" Mar 13 20:53:05 crc kubenswrapper[5029]: 
I0313 20:53:05.801156 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="016118a1-8825-4373-a487-2fa17c45488a" containerName="rabbitmq" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.801176 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4af81c-9a89-400f-aaa5-fac0c444aacf" containerName="registry-server" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.801188 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" containerName="rabbitmq" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.808469 5029 scope.go:117] "RemoveContainer" containerID="f101418f370ae7a45ed8ce6c68a911416c7a732eaaa28c1cf01622c29ce93a94" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.809585 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.817084 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rlc4s" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.817471 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.817450 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.818207 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.818368 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.818528 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 13 20:53:05 crc kubenswrapper[5029]: 
I0313 20:53:05.819019 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 13 20:53:05 crc kubenswrapper[5029]: E0313 20:53:05.822625 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016118a1_8825_4373_a487_2fa17c45488a.slice\": RecentStats: unable to find data in memory cache]" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.846823 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.848936 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.864789 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.866583 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.866695 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.866971 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.868670 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.869078 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.869104 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l76h4" Mar 13 20:53:05 crc 
kubenswrapper[5029]: I0313 20:53:05.869271 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.885281 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896125 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896177 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896208 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b09567a2-ae01-47b2-98be-4e4b9ee54a66-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896288 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896323 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b09567a2-ae01-47b2-98be-4e4b9ee54a66-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896345 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b09567a2-ae01-47b2-98be-4e4b9ee54a66-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896377 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896405 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896429 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8b6\" (UniqueName: \"kubernetes.io/projected/b09567a2-ae01-47b2-98be-4e4b9ee54a66-kube-api-access-kw8b6\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896460 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b09567a2-ae01-47b2-98be-4e4b9ee54a66-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.896482 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b09567a2-ae01-47b2-98be-4e4b9ee54a66-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.997685 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b09567a2-ae01-47b2-98be-4e4b9ee54a66-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.997736 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/473790b1-7b66-4983-89fa-22e81a350616-pod-info\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.997773 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.997795 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.997838 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.997896 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/473790b1-7b66-4983-89fa-22e81a350616-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.997931 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8b6\" (UniqueName: \"kubernetes.io/projected/b09567a2-ae01-47b2-98be-4e4b9ee54a66-kube-api-access-kw8b6\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.997975 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b09567a2-ae01-47b2-98be-4e4b9ee54a66-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998000 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-confd\") 
pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998018 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b09567a2-ae01-47b2-98be-4e4b9ee54a66-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998038 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998061 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998093 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b09567a2-ae01-47b2-98be-4e4b9ee54a66-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998153 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/473790b1-7b66-4983-89fa-22e81a350616-server-conf\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 
20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998182 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/473790b1-7b66-4983-89fa-22e81a350616-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998229 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvxpv\" (UniqueName: \"kubernetes.io/projected/473790b1-7b66-4983-89fa-22e81a350616-kube-api-access-pvxpv\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998259 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998279 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/473790b1-7b66-4983-89fa-22e81a350616-config-data\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998294 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 
20:53:05.998314 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998354 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b09567a2-ae01-47b2-98be-4e4b9ee54a66-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998371 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.998523 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.999183 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:05 crc kubenswrapper[5029]: I0313 20:53:05.999495 5029 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.000053 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b09567a2-ae01-47b2-98be-4e4b9ee54a66-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.000099 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b09567a2-ae01-47b2-98be-4e4b9ee54a66-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.001136 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b09567a2-ae01-47b2-98be-4e4b9ee54a66-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.016167 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.028073 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b09567a2-ae01-47b2-98be-4e4b9ee54a66-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.028146 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b09567a2-ae01-47b2-98be-4e4b9ee54a66-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.031938 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b09567a2-ae01-47b2-98be-4e4b9ee54a66-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.032176 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8b6\" (UniqueName: \"kubernetes.io/projected/b09567a2-ae01-47b2-98be-4e4b9ee54a66-kube-api-access-kw8b6\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.053119 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b09567a2-ae01-47b2-98be-4e4b9ee54a66\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.100983 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/473790b1-7b66-4983-89fa-22e81a350616-server-conf\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 
20:53:06.101040 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/473790b1-7b66-4983-89fa-22e81a350616-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.101107 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvxpv\" (UniqueName: \"kubernetes.io/projected/473790b1-7b66-4983-89fa-22e81a350616-kube-api-access-pvxpv\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.101150 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.101179 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/473790b1-7b66-4983-89fa-22e81a350616-config-data\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.101203 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.101275 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.101324 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/473790b1-7b66-4983-89fa-22e81a350616-pod-info\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.101393 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.102002 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.102385 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.103350 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/473790b1-7b66-4983-89fa-22e81a350616-server-conf\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " 
pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.103508 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/473790b1-7b66-4983-89fa-22e81a350616-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.104564 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.104907 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/473790b1-7b66-4983-89fa-22e81a350616-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.105037 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.105171 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/473790b1-7b66-4983-89fa-22e81a350616-config-data\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.106645 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/473790b1-7b66-4983-89fa-22e81a350616-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.109955 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.115584 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/473790b1-7b66-4983-89fa-22e81a350616-pod-info\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.119464 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/473790b1-7b66-4983-89fa-22e81a350616-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.125135 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvxpv\" (UniqueName: \"kubernetes.io/projected/473790b1-7b66-4983-89fa-22e81a350616-kube-api-access-pvxpv\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.149811 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"473790b1-7b66-4983-89fa-22e81a350616\") " pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc 
kubenswrapper[5029]: I0313 20:53:06.211555 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.225994 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.613985 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016118a1-8825-4373-a487-2fa17c45488a" path="/var/lib/kubelet/pods/016118a1-8825-4373-a487-2fa17c45488a/volumes" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.616808 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff0edef-42cf-4ba2-b170-87cfdd6deefb" path="/var/lib/kubelet/pods/7ff0edef-42cf-4ba2-b170-87cfdd6deefb/volumes" Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.807560 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 20:53:06 crc kubenswrapper[5029]: I0313 20:53:06.872264 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 20:53:06 crc kubenswrapper[5029]: W0313 20:53:06.876064 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod473790b1_7b66_4983_89fa_22e81a350616.slice/crio-0638eede592001d0a68b779650134b03f87314a04f2733934081bb3f0ef4f305 WatchSource:0}: Error finding container 0638eede592001d0a68b779650134b03f87314a04f2733934081bb3f0ef4f305: Status 404 returned error can't find the container with id 0638eede592001d0a68b779650134b03f87314a04f2733934081bb3f0ef4f305 Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.097517 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-jdwtf"] Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.100609 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.106321 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.117587 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-jdwtf"] Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.230316 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.230397 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-svc\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.230437 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.230654 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-openstack-edpm-ipam\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " 
pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.230961 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.231331 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnmv\" (UniqueName: \"kubernetes.io/projected/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-kube-api-access-svnmv\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.231508 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-config\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.334400 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.334506 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-svc\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " 
pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.334569 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.334626 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-openstack-edpm-ipam\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.334704 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.334778 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svnmv\" (UniqueName: \"kubernetes.io/projected/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-kube-api-access-svnmv\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.334832 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-config\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" 
Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.335670 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.335701 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.335681 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-svc\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.336270 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-openstack-edpm-ipam\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.336361 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-config\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.336822 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.359042 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svnmv\" (UniqueName: \"kubernetes.io/projected/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-kube-api-access-svnmv\") pod \"dnsmasq-dns-5559d4f67f-jdwtf\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.420086 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.662877 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"473790b1-7b66-4983-89fa-22e81a350616","Type":"ContainerStarted","Data":"0638eede592001d0a68b779650134b03f87314a04f2733934081bb3f0ef4f305"} Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.665077 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b09567a2-ae01-47b2-98be-4e4b9ee54a66","Type":"ContainerStarted","Data":"8ad8292816004dbca08bbcd1453d7afaca55f4943fff3f1ddd4b9f398848fadc"} Mar 13 20:53:07 crc kubenswrapper[5029]: I0313 20:53:07.920319 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-jdwtf"] Mar 13 20:53:08 crc kubenswrapper[5029]: W0313 20:53:08.075968 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eabaa8d_8bbe_4ae5_9b24_2d85a7ff4cfc.slice/crio-6d8ab22990ed4399c7e9ea1f7f559cc9fe175ebfdb660199f2bb52cae72f2b9d WatchSource:0}: Error finding container 
6d8ab22990ed4399c7e9ea1f7f559cc9fe175ebfdb660199f2bb52cae72f2b9d: Status 404 returned error can't find the container with id 6d8ab22990ed4399c7e9ea1f7f559cc9fe175ebfdb660199f2bb52cae72f2b9d Mar 13 20:53:08 crc kubenswrapper[5029]: I0313 20:53:08.677081 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" event={"ID":"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc","Type":"ContainerStarted","Data":"6d8ab22990ed4399c7e9ea1f7f559cc9fe175ebfdb660199f2bb52cae72f2b9d"} Mar 13 20:53:09 crc kubenswrapper[5029]: I0313 20:53:09.688380 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"473790b1-7b66-4983-89fa-22e81a350616","Type":"ContainerStarted","Data":"30f2487658a75949eb4e9bc0abe56b4ef548345f71ea52eb966de2a600be0f24"} Mar 13 20:53:09 crc kubenswrapper[5029]: I0313 20:53:09.691218 5029 generic.go:334] "Generic (PLEG): container finished" podID="8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" containerID="4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e" exitCode=0 Mar 13 20:53:09 crc kubenswrapper[5029]: I0313 20:53:09.691321 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" event={"ID":"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc","Type":"ContainerDied","Data":"4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e"} Mar 13 20:53:09 crc kubenswrapper[5029]: I0313 20:53:09.693622 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b09567a2-ae01-47b2-98be-4e4b9ee54a66","Type":"ContainerStarted","Data":"2821ab8010454713b520afcc2641a760d555e2daaf1b714d69fd0142f6daeca5"} Mar 13 20:53:10 crc kubenswrapper[5029]: I0313 20:53:10.706590 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" 
event={"ID":"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc","Type":"ContainerStarted","Data":"aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4"} Mar 13 20:53:10 crc kubenswrapper[5029]: I0313 20:53:10.744066 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" podStartSLOduration=3.7440401960000003 podStartE2EDuration="3.744040196s" podCreationTimestamp="2026-03-13 20:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:53:10.732992365 +0000 UTC m=+1550.749074768" watchObservedRunningTime="2026-03-13 20:53:10.744040196 +0000 UTC m=+1550.760122599" Mar 13 20:53:11 crc kubenswrapper[5029]: I0313 20:53:11.718254 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.422133 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.526714 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-cmdg5"] Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.527714 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" podUID="0474bc88-da72-4731-85a5-bc2b32263a20" containerName="dnsmasq-dns" containerID="cri-o://d8150ecfe1ed35ac0395d3a00b463aa687ab2fe55fe423e61fafbf4f78c68cc3" gracePeriod=10 Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.803077 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d99fc9df9-t7swp"] Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.813835 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.828422 5029 generic.go:334] "Generic (PLEG): container finished" podID="0474bc88-da72-4731-85a5-bc2b32263a20" containerID="d8150ecfe1ed35ac0395d3a00b463aa687ab2fe55fe423e61fafbf4f78c68cc3" exitCode=0 Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.828510 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" event={"ID":"0474bc88-da72-4731-85a5-bc2b32263a20","Type":"ContainerDied","Data":"d8150ecfe1ed35ac0395d3a00b463aa687ab2fe55fe423e61fafbf4f78c68cc3"} Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.878326 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d99fc9df9-t7swp"] Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.940552 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-dns-swift-storage-0\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.940628 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-dns-svc\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.940676 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-ovsdbserver-sb\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " 
pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.940698 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.940713 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-ovsdbserver-nb\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.940876 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-config\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:17 crc kubenswrapper[5029]: I0313 20:53:17.940932 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfl5r\" (UniqueName: \"kubernetes.io/projected/f47111bc-9b36-4714-b62d-cb3910f2445b-kube-api-access-hfl5r\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.043095 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfl5r\" (UniqueName: \"kubernetes.io/projected/f47111bc-9b36-4714-b62d-cb3910f2445b-kube-api-access-hfl5r\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: 
\"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.043200 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-dns-swift-storage-0\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.043227 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-dns-svc\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.043280 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-ovsdbserver-sb\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.043302 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.043320 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-ovsdbserver-nb\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " 
pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.043417 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-config\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.049755 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-dns-swift-storage-0\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.050476 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-config\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.051891 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-ovsdbserver-nb\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.052870 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 
20:53:18.056520 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-ovsdbserver-sb\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.059799 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47111bc-9b36-4714-b62d-cb3910f2445b-dns-svc\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.088789 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfl5r\" (UniqueName: \"kubernetes.io/projected/f47111bc-9b36-4714-b62d-cb3910f2445b-kube-api-access-hfl5r\") pod \"dnsmasq-dns-5d99fc9df9-t7swp\" (UID: \"f47111bc-9b36-4714-b62d-cb3910f2445b\") " pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.151392 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.282269 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.349440 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw8fx\" (UniqueName: \"kubernetes.io/projected/0474bc88-da72-4731-85a5-bc2b32263a20-kube-api-access-xw8fx\") pod \"0474bc88-da72-4731-85a5-bc2b32263a20\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.349604 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-nb\") pod \"0474bc88-da72-4731-85a5-bc2b32263a20\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.349812 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-swift-storage-0\") pod \"0474bc88-da72-4731-85a5-bc2b32263a20\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.349952 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-config\") pod \"0474bc88-da72-4731-85a5-bc2b32263a20\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.350553 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-sb\") pod \"0474bc88-da72-4731-85a5-bc2b32263a20\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.350581 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-svc\") pod \"0474bc88-da72-4731-85a5-bc2b32263a20\" (UID: \"0474bc88-da72-4731-85a5-bc2b32263a20\") " Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.360980 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0474bc88-da72-4731-85a5-bc2b32263a20-kube-api-access-xw8fx" (OuterVolumeSpecName: "kube-api-access-xw8fx") pod "0474bc88-da72-4731-85a5-bc2b32263a20" (UID: "0474bc88-da72-4731-85a5-bc2b32263a20"). InnerVolumeSpecName "kube-api-access-xw8fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.434228 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0474bc88-da72-4731-85a5-bc2b32263a20" (UID: "0474bc88-da72-4731-85a5-bc2b32263a20"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.438883 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0474bc88-da72-4731-85a5-bc2b32263a20" (UID: "0474bc88-da72-4731-85a5-bc2b32263a20"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.454324 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.454406 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.454422 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw8fx\" (UniqueName: \"kubernetes.io/projected/0474bc88-da72-4731-85a5-bc2b32263a20-kube-api-access-xw8fx\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.456620 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0474bc88-da72-4731-85a5-bc2b32263a20" (UID: "0474bc88-da72-4731-85a5-bc2b32263a20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.456686 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0474bc88-da72-4731-85a5-bc2b32263a20" (UID: "0474bc88-da72-4731-85a5-bc2b32263a20"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.476625 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-config" (OuterVolumeSpecName: "config") pod "0474bc88-da72-4731-85a5-bc2b32263a20" (UID: "0474bc88-da72-4731-85a5-bc2b32263a20"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.557175 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.557229 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.557242 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0474bc88-da72-4731-85a5-bc2b32263a20-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.717680 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d99fc9df9-t7swp"] Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.840604 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" event={"ID":"0474bc88-da72-4731-85a5-bc2b32263a20","Type":"ContainerDied","Data":"00f2a14f365f2a62c5f8456be4a2ae25741b8d809c8549066a65abc66fe37036"} Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.840669 5029 scope.go:117] "RemoveContainer" containerID="d8150ecfe1ed35ac0395d3a00b463aa687ab2fe55fe423e61fafbf4f78c68cc3" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.840819 5029 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b4c997d87-cmdg5" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.844095 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" event={"ID":"f47111bc-9b36-4714-b62d-cb3910f2445b","Type":"ContainerStarted","Data":"d19cedf84fb3c558d31700570bb4d20e735b33191629d626ec155300f54f9b4e"} Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.923946 5029 scope.go:117] "RemoveContainer" containerID="329805cef154718ed329ebc7408a17f56118c8e2e3b502cdcbe2d476ce94279f" Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.963050 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-cmdg5"] Mar 13 20:53:18 crc kubenswrapper[5029]: I0313 20:53:18.974439 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-cmdg5"] Mar 13 20:53:19 crc kubenswrapper[5029]: I0313 20:53:19.859446 5029 generic.go:334] "Generic (PLEG): container finished" podID="f47111bc-9b36-4714-b62d-cb3910f2445b" containerID="90a7978de24f928d0a4664eccd5ea82407e291b5a1649e21ec05a0db6e423a00" exitCode=0 Mar 13 20:53:19 crc kubenswrapper[5029]: I0313 20:53:19.859504 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" event={"ID":"f47111bc-9b36-4714-b62d-cb3910f2445b","Type":"ContainerDied","Data":"90a7978de24f928d0a4664eccd5ea82407e291b5a1649e21ec05a0db6e423a00"} Mar 13 20:53:20 crc kubenswrapper[5029]: I0313 20:53:20.621280 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0474bc88-da72-4731-85a5-bc2b32263a20" path="/var/lib/kubelet/pods/0474bc88-da72-4731-85a5-bc2b32263a20/volumes" Mar 13 20:53:20 crc kubenswrapper[5029]: I0313 20:53:20.894890 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" 
event={"ID":"f47111bc-9b36-4714-b62d-cb3910f2445b","Type":"ContainerStarted","Data":"a64e9d0a0fd4cfb33e21f5ac7b09a1f73167446c6f588a2ed0f84c11509a1f5d"} Mar 13 20:53:20 crc kubenswrapper[5029]: I0313 20:53:20.895506 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:20 crc kubenswrapper[5029]: I0313 20:53:20.932138 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" podStartSLOduration=3.932115282 podStartE2EDuration="3.932115282s" podCreationTimestamp="2026-03-13 20:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:53:20.924029291 +0000 UTC m=+1560.940111714" watchObservedRunningTime="2026-03-13 20:53:20.932115282 +0000 UTC m=+1560.948197685" Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.154236 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d99fc9df9-t7swp" Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.232649 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-jdwtf"] Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.233442 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" podUID="8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" containerName="dnsmasq-dns" containerID="cri-o://aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4" gracePeriod=10 Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.776186 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.906126 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-swift-storage-0\") pod \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.906438 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svnmv\" (UniqueName: \"kubernetes.io/projected/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-kube-api-access-svnmv\") pod \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.906559 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-sb\") pod \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.906719 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-openstack-edpm-ipam\") pod \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.906765 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-config\") pod \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.906797 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-svc\") pod \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.906863 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-nb\") pod \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\" (UID: \"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc\") " Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.915017 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-kube-api-access-svnmv" (OuterVolumeSpecName: "kube-api-access-svnmv") pod "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" (UID: "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc"). InnerVolumeSpecName "kube-api-access-svnmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.975707 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-config" (OuterVolumeSpecName: "config") pod "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" (UID: "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.978506 5029 generic.go:334] "Generic (PLEG): container finished" podID="8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" containerID="aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4" exitCode=0 Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.978612 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" event={"ID":"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc","Type":"ContainerDied","Data":"aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4"} Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.978653 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" event={"ID":"8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc","Type":"ContainerDied","Data":"6d8ab22990ed4399c7e9ea1f7f559cc9fe175ebfdb660199f2bb52cae72f2b9d"} Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.978674 5029 scope.go:117] "RemoveContainer" containerID="aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4" Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.978843 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5559d4f67f-jdwtf" Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.978980 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" (UID: "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.983796 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" (UID: "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:28 crc kubenswrapper[5029]: I0313 20:53:28.994106 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" (UID: "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.001028 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" (UID: "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.003008 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" (UID: "8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.010191 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svnmv\" (UniqueName: \"kubernetes.io/projected/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-kube-api-access-svnmv\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.010236 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.010247 5029 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.010259 5029 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-config\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.010270 5029 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.010280 5029 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.010288 5029 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.075148 
5029 scope.go:117] "RemoveContainer" containerID="4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.100941 5029 scope.go:117] "RemoveContainer" containerID="aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4" Mar 13 20:53:29 crc kubenswrapper[5029]: E0313 20:53:29.101467 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4\": container with ID starting with aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4 not found: ID does not exist" containerID="aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.101525 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4"} err="failed to get container status \"aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4\": rpc error: code = NotFound desc = could not find container \"aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4\": container with ID starting with aaa762f63d519b7783daf24fd1edd68aa52280f39e8f9eb65b0da0e3122418e4 not found: ID does not exist" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.101562 5029 scope.go:117] "RemoveContainer" containerID="4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e" Mar 13 20:53:29 crc kubenswrapper[5029]: E0313 20:53:29.102106 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e\": container with ID starting with 4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e not found: ID does not exist" containerID="4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e" 
Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.102147 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e"} err="failed to get container status \"4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e\": rpc error: code = NotFound desc = could not find container \"4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e\": container with ID starting with 4aa6f6322934f840f25412d1957d69fa262e065ac34b139250939d526d1e3b4e not found: ID does not exist" Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.313843 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-jdwtf"] Mar 13 20:53:29 crc kubenswrapper[5029]: I0313 20:53:29.323685 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-jdwtf"] Mar 13 20:53:30 crc kubenswrapper[5029]: I0313 20:53:30.622082 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" path="/var/lib/kubelet/pods/8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc/volumes" Mar 13 20:53:31 crc kubenswrapper[5029]: I0313 20:53:31.951195 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:53:31 crc kubenswrapper[5029]: I0313 20:53:31.952016 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:53:31 crc kubenswrapper[5029]: I0313 20:53:31.952904 5029 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:53:31 crc kubenswrapper[5029]: I0313 20:53:31.953888 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42ae9c192c95047ca08bd80103ba761f255a1bb01b61e6cc285f78d6d6c0169b"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 20:53:31 crc kubenswrapper[5029]: I0313 20:53:31.953961 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://42ae9c192c95047ca08bd80103ba761f255a1bb01b61e6cc285f78d6d6c0169b" gracePeriod=600 Mar 13 20:53:33 crc kubenswrapper[5029]: I0313 20:53:33.029604 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="42ae9c192c95047ca08bd80103ba761f255a1bb01b61e6cc285f78d6d6c0169b" exitCode=0 Mar 13 20:53:33 crc kubenswrapper[5029]: I0313 20:53:33.029683 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"42ae9c192c95047ca08bd80103ba761f255a1bb01b61e6cc285f78d6d6c0169b"} Mar 13 20:53:33 crc kubenswrapper[5029]: I0313 20:53:33.030433 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1"} Mar 13 20:53:33 crc kubenswrapper[5029]: I0313 20:53:33.030468 5029 scope.go:117] "RemoveContainer" 
containerID="fc08a3f0bf62f626b96edf0adf5dbb9a0493ba7c49c9be50ad8bce4dd83f3787" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.535739 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv"] Mar 13 20:53:40 crc kubenswrapper[5029]: E0313 20:53:40.537243 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" containerName="init" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.537259 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" containerName="init" Mar 13 20:53:40 crc kubenswrapper[5029]: E0313 20:53:40.537278 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0474bc88-da72-4731-85a5-bc2b32263a20" containerName="init" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.537285 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="0474bc88-da72-4731-85a5-bc2b32263a20" containerName="init" Mar 13 20:53:40 crc kubenswrapper[5029]: E0313 20:53:40.537311 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" containerName="dnsmasq-dns" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.537318 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" containerName="dnsmasq-dns" Mar 13 20:53:40 crc kubenswrapper[5029]: E0313 20:53:40.537335 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0474bc88-da72-4731-85a5-bc2b32263a20" containerName="dnsmasq-dns" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.537342 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="0474bc88-da72-4731-85a5-bc2b32263a20" containerName="dnsmasq-dns" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.537559 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="0474bc88-da72-4731-85a5-bc2b32263a20" containerName="dnsmasq-dns" Mar 13 
20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.537577 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eabaa8d-8bbe-4ae5-9b24-2d85a7ff4cfc" containerName="dnsmasq-dns" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.538369 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.548462 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv"] Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.564302 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.564600 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.564748 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.564889 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.712631 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hvf6\" (UniqueName: \"kubernetes.io/projected/7914fbef-d24e-4d69-aa5d-1bec7c231341-kube-api-access-2hvf6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.713195 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.713416 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.713680 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.815825 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hvf6\" (UniqueName: \"kubernetes.io/projected/7914fbef-d24e-4d69-aa5d-1bec7c231341-kube-api-access-2hvf6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.815960 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: 
\"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.816129 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.816328 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.827052 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.832380 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.833584 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.836733 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hvf6\" (UniqueName: \"kubernetes.io/projected/7914fbef-d24e-4d69-aa5d-1bec7c231341-kube-api-access-2hvf6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-56klv\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:40 crc kubenswrapper[5029]: I0313 20:53:40.922039 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:53:41 crc kubenswrapper[5029]: I0313 20:53:41.497001 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv"] Mar 13 20:53:41 crc kubenswrapper[5029]: W0313 20:53:41.500945 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7914fbef_d24e_4d69_aa5d_1bec7c231341.slice/crio-dd8b9136ee92c3560a10f7084813730ad6fa57125bc26c44ccd5e09c181569dc WatchSource:0}: Error finding container dd8b9136ee92c3560a10f7084813730ad6fa57125bc26c44ccd5e09c181569dc: Status 404 returned error can't find the container with id dd8b9136ee92c3560a10f7084813730ad6fa57125bc26c44ccd5e09c181569dc Mar 13 20:53:41 crc kubenswrapper[5029]: I0313 20:53:41.504114 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 20:53:42 crc kubenswrapper[5029]: I0313 20:53:42.140114 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="473790b1-7b66-4983-89fa-22e81a350616" containerID="30f2487658a75949eb4e9bc0abe56b4ef548345f71ea52eb966de2a600be0f24" exitCode=0 Mar 13 20:53:42 crc kubenswrapper[5029]: I0313 20:53:42.140222 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"473790b1-7b66-4983-89fa-22e81a350616","Type":"ContainerDied","Data":"30f2487658a75949eb4e9bc0abe56b4ef548345f71ea52eb966de2a600be0f24"} Mar 13 20:53:42 crc kubenswrapper[5029]: I0313 20:53:42.143529 5029 generic.go:334] "Generic (PLEG): container finished" podID="b09567a2-ae01-47b2-98be-4e4b9ee54a66" containerID="2821ab8010454713b520afcc2641a760d555e2daaf1b714d69fd0142f6daeca5" exitCode=0 Mar 13 20:53:42 crc kubenswrapper[5029]: I0313 20:53:42.143606 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b09567a2-ae01-47b2-98be-4e4b9ee54a66","Type":"ContainerDied","Data":"2821ab8010454713b520afcc2641a760d555e2daaf1b714d69fd0142f6daeca5"} Mar 13 20:53:42 crc kubenswrapper[5029]: I0313 20:53:42.146904 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" event={"ID":"7914fbef-d24e-4d69-aa5d-1bec7c231341","Type":"ContainerStarted","Data":"dd8b9136ee92c3560a10f7084813730ad6fa57125bc26c44ccd5e09c181569dc"} Mar 13 20:53:43 crc kubenswrapper[5029]: I0313 20:53:43.161137 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"473790b1-7b66-4983-89fa-22e81a350616","Type":"ContainerStarted","Data":"b161c36919cb2959cc8db9d513a0fca69a993eec275ceb8cf475f3ae349fbdf6"} Mar 13 20:53:43 crc kubenswrapper[5029]: I0313 20:53:43.162749 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 13 20:53:43 crc kubenswrapper[5029]: I0313 20:53:43.168604 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"b09567a2-ae01-47b2-98be-4e4b9ee54a66","Type":"ContainerStarted","Data":"32a1d14ed0478569f1ca095bb4d4dfd446b7728fbf3559560ec607d9bf70f84a"} Mar 13 20:53:43 crc kubenswrapper[5029]: I0313 20:53:43.169229 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:43 crc kubenswrapper[5029]: I0313 20:53:43.194621 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.194603001 podStartE2EDuration="38.194603001s" podCreationTimestamp="2026-03-13 20:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:53:43.190218231 +0000 UTC m=+1583.206300634" watchObservedRunningTime="2026-03-13 20:53:43.194603001 +0000 UTC m=+1583.210685404" Mar 13 20:53:43 crc kubenswrapper[5029]: I0313 20:53:43.225461 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.225441181 podStartE2EDuration="38.225441181s" podCreationTimestamp="2026-03-13 20:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 20:53:43.218167163 +0000 UTC m=+1583.234249566" watchObservedRunningTime="2026-03-13 20:53:43.225441181 +0000 UTC m=+1583.241523584" Mar 13 20:53:52 crc kubenswrapper[5029]: I0313 20:53:52.299195 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" event={"ID":"7914fbef-d24e-4d69-aa5d-1bec7c231341","Type":"ContainerStarted","Data":"0780c8e17a2899a80b8ee1dbdbb7f478e3f93283dcbc0a11df547d4a71967069"} Mar 13 20:53:52 crc kubenswrapper[5029]: I0313 20:53:52.332725 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" 
podStartSLOduration=2.610890348 podStartE2EDuration="12.332702264s" podCreationTimestamp="2026-03-13 20:53:40 +0000 UTC" firstStartedPulling="2026-03-13 20:53:41.5038394 +0000 UTC m=+1581.519921803" lastFinishedPulling="2026-03-13 20:53:51.225651326 +0000 UTC m=+1591.241733719" observedRunningTime="2026-03-13 20:53:52.324064609 +0000 UTC m=+1592.340147012" watchObservedRunningTime="2026-03-13 20:53:52.332702264 +0000 UTC m=+1592.348784667" Mar 13 20:53:56 crc kubenswrapper[5029]: I0313 20:53:56.217065 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 13 20:53:56 crc kubenswrapper[5029]: I0313 20:53:56.229131 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.157212 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557254-psclf"] Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.159390 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557254-psclf" Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.163643 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.165110 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.165226 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.167081 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557254-psclf"] Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.224385 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzcr5\" (UniqueName: \"kubernetes.io/projected/3a30d71f-d681-4dce-b39f-4e0304fc1a95-kube-api-access-wzcr5\") pod \"auto-csr-approver-29557254-psclf\" (UID: \"3a30d71f-d681-4dce-b39f-4e0304fc1a95\") " pod="openshift-infra/auto-csr-approver-29557254-psclf" Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.328207 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzcr5\" (UniqueName: \"kubernetes.io/projected/3a30d71f-d681-4dce-b39f-4e0304fc1a95-kube-api-access-wzcr5\") pod \"auto-csr-approver-29557254-psclf\" (UID: \"3a30d71f-d681-4dce-b39f-4e0304fc1a95\") " pod="openshift-infra/auto-csr-approver-29557254-psclf" Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.357666 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzcr5\" (UniqueName: \"kubernetes.io/projected/3a30d71f-d681-4dce-b39f-4e0304fc1a95-kube-api-access-wzcr5\") pod \"auto-csr-approver-29557254-psclf\" (UID: \"3a30d71f-d681-4dce-b39f-4e0304fc1a95\") " 
pod="openshift-infra/auto-csr-approver-29557254-psclf" Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.481300 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557254-psclf" Mar 13 20:54:00 crc kubenswrapper[5029]: I0313 20:54:00.980016 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557254-psclf"] Mar 13 20:54:00 crc kubenswrapper[5029]: W0313 20:54:00.985009 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a30d71f_d681_4dce_b39f_4e0304fc1a95.slice/crio-17aab6184db5571ca420db2aff50e6cfc0ba87619fa26c9bd97fd65e5cf9fa0a WatchSource:0}: Error finding container 17aab6184db5571ca420db2aff50e6cfc0ba87619fa26c9bd97fd65e5cf9fa0a: Status 404 returned error can't find the container with id 17aab6184db5571ca420db2aff50e6cfc0ba87619fa26c9bd97fd65e5cf9fa0a Mar 13 20:54:01 crc kubenswrapper[5029]: I0313 20:54:01.402519 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557254-psclf" event={"ID":"3a30d71f-d681-4dce-b39f-4e0304fc1a95","Type":"ContainerStarted","Data":"17aab6184db5571ca420db2aff50e6cfc0ba87619fa26c9bd97fd65e5cf9fa0a"} Mar 13 20:54:03 crc kubenswrapper[5029]: I0313 20:54:03.428362 5029 generic.go:334] "Generic (PLEG): container finished" podID="7914fbef-d24e-4d69-aa5d-1bec7c231341" containerID="0780c8e17a2899a80b8ee1dbdbb7f478e3f93283dcbc0a11df547d4a71967069" exitCode=0 Mar 13 20:54:03 crc kubenswrapper[5029]: I0313 20:54:03.428480 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" event={"ID":"7914fbef-d24e-4d69-aa5d-1bec7c231341","Type":"ContainerDied","Data":"0780c8e17a2899a80b8ee1dbdbb7f478e3f93283dcbc0a11df547d4a71967069"} Mar 13 20:54:03 crc kubenswrapper[5029]: I0313 20:54:03.432354 5029 generic.go:334] "Generic (PLEG): container 
finished" podID="3a30d71f-d681-4dce-b39f-4e0304fc1a95" containerID="2518d599c9eafbf0c88c9e873be2a900214d81a987e1b52b189ec9503eb66d7b" exitCode=0 Mar 13 20:54:03 crc kubenswrapper[5029]: I0313 20:54:03.432417 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557254-psclf" event={"ID":"3a30d71f-d681-4dce-b39f-4e0304fc1a95","Type":"ContainerDied","Data":"2518d599c9eafbf0c88c9e873be2a900214d81a987e1b52b189ec9503eb66d7b"} Mar 13 20:54:04 crc kubenswrapper[5029]: I0313 20:54:04.973165 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557254-psclf" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.047778 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzcr5\" (UniqueName: \"kubernetes.io/projected/3a30d71f-d681-4dce-b39f-4e0304fc1a95-kube-api-access-wzcr5\") pod \"3a30d71f-d681-4dce-b39f-4e0304fc1a95\" (UID: \"3a30d71f-d681-4dce-b39f-4e0304fc1a95\") " Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.056449 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a30d71f-d681-4dce-b39f-4e0304fc1a95-kube-api-access-wzcr5" (OuterVolumeSpecName: "kube-api-access-wzcr5") pod "3a30d71f-d681-4dce-b39f-4e0304fc1a95" (UID: "3a30d71f-d681-4dce-b39f-4e0304fc1a95"). InnerVolumeSpecName "kube-api-access-wzcr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.153731 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzcr5\" (UniqueName: \"kubernetes.io/projected/3a30d71f-d681-4dce-b39f-4e0304fc1a95-kube-api-access-wzcr5\") on node \"crc\" DevicePath \"\"" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.167049 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.255626 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-repo-setup-combined-ca-bundle\") pod \"7914fbef-d24e-4d69-aa5d-1bec7c231341\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.255827 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-inventory\") pod \"7914fbef-d24e-4d69-aa5d-1bec7c231341\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.255920 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-ssh-key-openstack-edpm-ipam\") pod \"7914fbef-d24e-4d69-aa5d-1bec7c231341\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.256341 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hvf6\" (UniqueName: \"kubernetes.io/projected/7914fbef-d24e-4d69-aa5d-1bec7c231341-kube-api-access-2hvf6\") pod \"7914fbef-d24e-4d69-aa5d-1bec7c231341\" (UID: \"7914fbef-d24e-4d69-aa5d-1bec7c231341\") " Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.260739 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7914fbef-d24e-4d69-aa5d-1bec7c231341" (UID: "7914fbef-d24e-4d69-aa5d-1bec7c231341"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.264037 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7914fbef-d24e-4d69-aa5d-1bec7c231341-kube-api-access-2hvf6" (OuterVolumeSpecName: "kube-api-access-2hvf6") pod "7914fbef-d24e-4d69-aa5d-1bec7c231341" (UID: "7914fbef-d24e-4d69-aa5d-1bec7c231341"). InnerVolumeSpecName "kube-api-access-2hvf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.288696 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-inventory" (OuterVolumeSpecName: "inventory") pod "7914fbef-d24e-4d69-aa5d-1bec7c231341" (UID: "7914fbef-d24e-4d69-aa5d-1bec7c231341"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.291185 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7914fbef-d24e-4d69-aa5d-1bec7c231341" (UID: "7914fbef-d24e-4d69-aa5d-1bec7c231341"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.360592 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hvf6\" (UniqueName: \"kubernetes.io/projected/7914fbef-d24e-4d69-aa5d-1bec7c231341-kube-api-access-2hvf6\") on node \"crc\" DevicePath \"\"" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.360663 5029 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.360691 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.360715 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7914fbef-d24e-4d69-aa5d-1bec7c231341-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.473186 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557254-psclf" event={"ID":"3a30d71f-d681-4dce-b39f-4e0304fc1a95","Type":"ContainerDied","Data":"17aab6184db5571ca420db2aff50e6cfc0ba87619fa26c9bd97fd65e5cf9fa0a"} Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.473251 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17aab6184db5571ca420db2aff50e6cfc0ba87619fa26c9bd97fd65e5cf9fa0a" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.473320 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557254-psclf" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.495335 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" event={"ID":"7914fbef-d24e-4d69-aa5d-1bec7c231341","Type":"ContainerDied","Data":"dd8b9136ee92c3560a10f7084813730ad6fa57125bc26c44ccd5e09c181569dc"} Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.495453 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd8b9136ee92c3560a10f7084813730ad6fa57125bc26c44ccd5e09c181569dc" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.496037 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-56klv" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.622372 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w"] Mar 13 20:54:05 crc kubenswrapper[5029]: E0313 20:54:05.623226 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7914fbef-d24e-4d69-aa5d-1bec7c231341" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.623252 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7914fbef-d24e-4d69-aa5d-1bec7c231341" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 13 20:54:05 crc kubenswrapper[5029]: E0313 20:54:05.623269 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a30d71f-d681-4dce-b39f-4e0304fc1a95" containerName="oc" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.623277 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a30d71f-d681-4dce-b39f-4e0304fc1a95" containerName="oc" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.623599 5029 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3a30d71f-d681-4dce-b39f-4e0304fc1a95" containerName="oc" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.623630 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="7914fbef-d24e-4d69-aa5d-1bec7c231341" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.625035 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.634719 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.634948 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.635016 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.635316 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.647550 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w"] Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.668310 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cf94w\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.668734 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cf94w\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.668883 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-455v5\" (UniqueName: \"kubernetes.io/projected/ae031e82-8607-4f07-a080-d259c4dd17e2-kube-api-access-455v5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cf94w\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.771123 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cf94w\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.771752 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cf94w\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.771784 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-455v5\" (UniqueName: \"kubernetes.io/projected/ae031e82-8607-4f07-a080-d259c4dd17e2-kube-api-access-455v5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cf94w\" 
(UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.782381 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cf94w\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.782831 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cf94w\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.794164 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-455v5\" (UniqueName: \"kubernetes.io/projected/ae031e82-8607-4f07-a080-d259c4dd17e2-kube-api-access-455v5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cf94w\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:05 crc kubenswrapper[5029]: I0313 20:54:05.945752 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:06 crc kubenswrapper[5029]: I0313 20:54:06.070007 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557248-5d2z7"] Mar 13 20:54:06 crc kubenswrapper[5029]: I0313 20:54:06.092960 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557248-5d2z7"] Mar 13 20:54:06 crc kubenswrapper[5029]: I0313 20:54:06.559502 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w"] Mar 13 20:54:06 crc kubenswrapper[5029]: W0313 20:54:06.565918 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae031e82_8607_4f07_a080_d259c4dd17e2.slice/crio-a7150295c679679168be6d54b5d9dd2624e062645f8a203f3dced104022095fe WatchSource:0}: Error finding container a7150295c679679168be6d54b5d9dd2624e062645f8a203f3dced104022095fe: Status 404 returned error can't find the container with id a7150295c679679168be6d54b5d9dd2624e062645f8a203f3dced104022095fe Mar 13 20:54:06 crc kubenswrapper[5029]: I0313 20:54:06.614202 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="508d75bf-4e85-49d1-b942-ebd7d8a63e51" path="/var/lib/kubelet/pods/508d75bf-4e85-49d1-b942-ebd7d8a63e51/volumes" Mar 13 20:54:07 crc kubenswrapper[5029]: I0313 20:54:07.530302 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" event={"ID":"ae031e82-8607-4f07-a080-d259c4dd17e2","Type":"ContainerStarted","Data":"a7150295c679679168be6d54b5d9dd2624e062645f8a203f3dced104022095fe"} Mar 13 20:54:08 crc kubenswrapper[5029]: I0313 20:54:08.529900 5029 scope.go:117] "RemoveContainer" containerID="f218aa80a96b66b0dcb404ade51868980754399c43b9cc4032a0c012366d07d8" Mar 13 20:54:08 crc kubenswrapper[5029]: I0313 20:54:08.547617 5029 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" event={"ID":"ae031e82-8607-4f07-a080-d259c4dd17e2","Type":"ContainerStarted","Data":"53a2f4005b7a28182d44c3d15d87a245d4885e3bb68e0fd9bba075e51391ee73"} Mar 13 20:54:08 crc kubenswrapper[5029]: I0313 20:54:08.574798 5029 scope.go:117] "RemoveContainer" containerID="bf9166e5850834b7827d2f2cca353355cad3585d5a41b5d7498ef6b88e12fce9" Mar 13 20:54:08 crc kubenswrapper[5029]: I0313 20:54:08.581819 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" podStartSLOduration=2.607138091 podStartE2EDuration="3.58178812s" podCreationTimestamp="2026-03-13 20:54:05 +0000 UTC" firstStartedPulling="2026-03-13 20:54:06.571790279 +0000 UTC m=+1606.587872682" lastFinishedPulling="2026-03-13 20:54:07.546440308 +0000 UTC m=+1607.562522711" observedRunningTime="2026-03-13 20:54:08.574458551 +0000 UTC m=+1608.590540974" watchObservedRunningTime="2026-03-13 20:54:08.58178812 +0000 UTC m=+1608.597870533" Mar 13 20:54:08 crc kubenswrapper[5029]: I0313 20:54:08.666263 5029 scope.go:117] "RemoveContainer" containerID="983184384b28336161891a392548956c22d8f6d8f3212b2185a18d18739da541" Mar 13 20:54:10 crc kubenswrapper[5029]: I0313 20:54:10.575061 5029 generic.go:334] "Generic (PLEG): container finished" podID="ae031e82-8607-4f07-a080-d259c4dd17e2" containerID="53a2f4005b7a28182d44c3d15d87a245d4885e3bb68e0fd9bba075e51391ee73" exitCode=0 Mar 13 20:54:10 crc kubenswrapper[5029]: I0313 20:54:10.576092 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" event={"ID":"ae031e82-8607-4f07-a080-d259c4dd17e2","Type":"ContainerDied","Data":"53a2f4005b7a28182d44c3d15d87a245d4885e3bb68e0fd9bba075e51391ee73"} Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.336115 5029 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-jxb2x"] Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.338805 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.350448 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxb2x"] Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.467094 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-utilities\") pod \"community-operators-jxb2x\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") " pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.467478 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw2gp\" (UniqueName: \"kubernetes.io/projected/bc9de11d-e210-4190-8104-e2861f46f832-kube-api-access-vw2gp\") pod \"community-operators-jxb2x\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") " pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.467560 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-catalog-content\") pod \"community-operators-jxb2x\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") " pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.569606 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-catalog-content\") pod \"community-operators-jxb2x\" (UID: 
\"bc9de11d-e210-4190-8104-e2861f46f832\") " pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.570040 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-utilities\") pod \"community-operators-jxb2x\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") " pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.570149 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw2gp\" (UniqueName: \"kubernetes.io/projected/bc9de11d-e210-4190-8104-e2861f46f832-kube-api-access-vw2gp\") pod \"community-operators-jxb2x\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") " pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.570248 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-catalog-content\") pod \"community-operators-jxb2x\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") " pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.570626 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-utilities\") pod \"community-operators-jxb2x\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") " pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.593169 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw2gp\" (UniqueName: \"kubernetes.io/projected/bc9de11d-e210-4190-8104-e2861f46f832-kube-api-access-vw2gp\") pod \"community-operators-jxb2x\" (UID: 
\"bc9de11d-e210-4190-8104-e2861f46f832\") " pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:11 crc kubenswrapper[5029]: I0313 20:54:11.661055 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxb2x" Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.221442 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.294897 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-inventory\") pod \"ae031e82-8607-4f07-a080-d259c4dd17e2\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.295330 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-ssh-key-openstack-edpm-ipam\") pod \"ae031e82-8607-4f07-a080-d259c4dd17e2\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.295471 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-455v5\" (UniqueName: \"kubernetes.io/projected/ae031e82-8607-4f07-a080-d259c4dd17e2-kube-api-access-455v5\") pod \"ae031e82-8607-4f07-a080-d259c4dd17e2\" (UID: \"ae031e82-8607-4f07-a080-d259c4dd17e2\") " Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.302028 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae031e82-8607-4f07-a080-d259c4dd17e2-kube-api-access-455v5" (OuterVolumeSpecName: "kube-api-access-455v5") pod "ae031e82-8607-4f07-a080-d259c4dd17e2" (UID: "ae031e82-8607-4f07-a080-d259c4dd17e2"). InnerVolumeSpecName "kube-api-access-455v5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.334080 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-inventory" (OuterVolumeSpecName: "inventory") pod "ae031e82-8607-4f07-a080-d259c4dd17e2" (UID: "ae031e82-8607-4f07-a080-d259c4dd17e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.339189 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae031e82-8607-4f07-a080-d259c4dd17e2" (UID: "ae031e82-8607-4f07-a080-d259c4dd17e2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.346135 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxb2x"] Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.399211 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-455v5\" (UniqueName: \"kubernetes.io/projected/ae031e82-8607-4f07-a080-d259c4dd17e2-kube-api-access-455v5\") on node \"crc\" DevicePath \"\"" Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.399267 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.399287 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae031e82-8607-4f07-a080-d259c4dd17e2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 
20:54:12.610650 5029 generic.go:334] "Generic (PLEG): container finished" podID="bc9de11d-e210-4190-8104-e2861f46f832" containerID="35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48" exitCode=0 Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.614342 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxb2x" event={"ID":"bc9de11d-e210-4190-8104-e2861f46f832","Type":"ContainerDied","Data":"35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48"} Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.614385 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxb2x" event={"ID":"bc9de11d-e210-4190-8104-e2861f46f832","Type":"ContainerStarted","Data":"a5cda940740115f4c2f2e6c77091759ee67631d09ef5213144c36becb5422927"} Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.620058 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w" event={"ID":"ae031e82-8607-4f07-a080-d259c4dd17e2","Type":"ContainerDied","Data":"a7150295c679679168be6d54b5d9dd2624e062645f8a203f3dced104022095fe"} Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.620108 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7150295c679679168be6d54b5d9dd2624e062645f8a203f3dced104022095fe" Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.620191 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cf94w"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.702218 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"]
Mar 13 20:54:12 crc kubenswrapper[5029]: E0313 20:54:12.703064 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae031e82-8607-4f07-a080-d259c4dd17e2" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.703097 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae031e82-8607-4f07-a080-d259c4dd17e2" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.703433 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae031e82-8607-4f07-a080-d259c4dd17e2" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.704543 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.714713 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.715074 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.715463 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.719213 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.731807 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"]
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.810438 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp5bd\" (UniqueName: \"kubernetes.io/projected/0536889c-718f-4c69-a5ca-7428e7c351db-kube-api-access-jp5bd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.810972 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.811087 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.811117 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.913313 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp5bd\" (UniqueName: \"kubernetes.io/projected/0536889c-718f-4c69-a5ca-7428e7c351db-kube-api-access-jp5bd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.913381 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.913451 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.913485 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.920781 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.920809 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.922625 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:12 crc kubenswrapper[5029]: I0313 20:54:12.931704 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp5bd\" (UniqueName: \"kubernetes.io/projected/0536889c-718f-4c69-a5ca-7428e7c351db-kube-api-access-jp5bd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:13 crc kubenswrapper[5029]: I0313 20:54:13.034216 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"
Mar 13 20:54:13 crc kubenswrapper[5029]: I0313 20:54:13.628602 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr"]
Mar 13 20:54:13 crc kubenswrapper[5029]: W0313 20:54:13.641207 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0536889c_718f_4c69_a5ca_7428e7c351db.slice/crio-b1e0b704d46b6c634d6ea8e8f13e94af60cf161df8a10bd64d58ecedb5dd5cc7 WatchSource:0}: Error finding container b1e0b704d46b6c634d6ea8e8f13e94af60cf161df8a10bd64d58ecedb5dd5cc7: Status 404 returned error can't find the container with id b1e0b704d46b6c634d6ea8e8f13e94af60cf161df8a10bd64d58ecedb5dd5cc7
Mar 13 20:54:14 crc kubenswrapper[5029]: I0313 20:54:14.646600 5029 generic.go:334] "Generic (PLEG): container finished" podID="bc9de11d-e210-4190-8104-e2861f46f832" containerID="9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592" exitCode=0
Mar 13 20:54:14 crc kubenswrapper[5029]: I0313 20:54:14.646709 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxb2x" event={"ID":"bc9de11d-e210-4190-8104-e2861f46f832","Type":"ContainerDied","Data":"9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592"}
Mar 13 20:54:14 crc kubenswrapper[5029]: I0313 20:54:14.651516 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr" event={"ID":"0536889c-718f-4c69-a5ca-7428e7c351db","Type":"ContainerStarted","Data":"9482e78d644cc230e2d27c2cace24c8fb889fc36435dac831e6ac53acd7a060e"}
Mar 13 20:54:14 crc kubenswrapper[5029]: I0313 20:54:14.651566 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr" event={"ID":"0536889c-718f-4c69-a5ca-7428e7c351db","Type":"ContainerStarted","Data":"b1e0b704d46b6c634d6ea8e8f13e94af60cf161df8a10bd64d58ecedb5dd5cc7"}
Mar 13 20:54:14 crc kubenswrapper[5029]: I0313 20:54:14.699186 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr" podStartSLOduration=2.266872486 podStartE2EDuration="2.699158479s" podCreationTimestamp="2026-03-13 20:54:12 +0000 UTC" firstStartedPulling="2026-03-13 20:54:13.667227889 +0000 UTC m=+1613.683310292" lastFinishedPulling="2026-03-13 20:54:14.099513882 +0000 UTC m=+1614.115596285" observedRunningTime="2026-03-13 20:54:14.693263648 +0000 UTC m=+1614.709346061" watchObservedRunningTime="2026-03-13 20:54:14.699158479 +0000 UTC m=+1614.715240882"
Mar 13 20:54:15 crc kubenswrapper[5029]: I0313 20:54:15.665286 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxb2x" event={"ID":"bc9de11d-e210-4190-8104-e2861f46f832","Type":"ContainerStarted","Data":"8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497"}
Mar 13 20:54:15 crc kubenswrapper[5029]: I0313 20:54:15.684957 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jxb2x" podStartSLOduration=2.044585315 podStartE2EDuration="4.684939171s" podCreationTimestamp="2026-03-13 20:54:11 +0000 UTC" firstStartedPulling="2026-03-13 20:54:12.61375094 +0000 UTC m=+1612.629833333" lastFinishedPulling="2026-03-13 20:54:15.254104786 +0000 UTC m=+1615.270187189" observedRunningTime="2026-03-13 20:54:15.684696204 +0000 UTC m=+1615.700778607" watchObservedRunningTime="2026-03-13 20:54:15.684939171 +0000 UTC m=+1615.701021574"
Mar 13 20:54:21 crc kubenswrapper[5029]: I0313 20:54:21.661842 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jxb2x"
Mar 13 20:54:21 crc kubenswrapper[5029]: I0313 20:54:21.663962 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jxb2x"
Mar 13 20:54:22 crc kubenswrapper[5029]: I0313 20:54:22.715684 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jxb2x" podUID="bc9de11d-e210-4190-8104-e2861f46f832" containerName="registry-server" probeResult="failure" output=<
Mar 13 20:54:22 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s
Mar 13 20:54:22 crc kubenswrapper[5029]: >
Mar 13 20:54:31 crc kubenswrapper[5029]: I0313 20:54:31.727535 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jxb2x"
Mar 13 20:54:31 crc kubenswrapper[5029]: I0313 20:54:31.787837 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jxb2x"
Mar 13 20:54:31 crc kubenswrapper[5029]: I0313 20:54:31.977746 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxb2x"]
Mar 13 20:54:32 crc kubenswrapper[5029]: I0313 20:54:32.876574 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jxb2x" podUID="bc9de11d-e210-4190-8104-e2861f46f832" containerName="registry-server" containerID="cri-o://8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497" gracePeriod=2
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.400188 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxb2x"
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.563818 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-utilities\") pod \"bc9de11d-e210-4190-8104-e2861f46f832\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") "
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.564099 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-catalog-content\") pod \"bc9de11d-e210-4190-8104-e2861f46f832\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") "
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.564173 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw2gp\" (UniqueName: \"kubernetes.io/projected/bc9de11d-e210-4190-8104-e2861f46f832-kube-api-access-vw2gp\") pod \"bc9de11d-e210-4190-8104-e2861f46f832\" (UID: \"bc9de11d-e210-4190-8104-e2861f46f832\") "
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.564940 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-utilities" (OuterVolumeSpecName: "utilities") pod "bc9de11d-e210-4190-8104-e2861f46f832" (UID: "bc9de11d-e210-4190-8104-e2861f46f832"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.565786 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.572330 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9de11d-e210-4190-8104-e2861f46f832-kube-api-access-vw2gp" (OuterVolumeSpecName: "kube-api-access-vw2gp") pod "bc9de11d-e210-4190-8104-e2861f46f832" (UID: "bc9de11d-e210-4190-8104-e2861f46f832"). InnerVolumeSpecName "kube-api-access-vw2gp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.620663 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc9de11d-e210-4190-8104-e2861f46f832" (UID: "bc9de11d-e210-4190-8104-e2861f46f832"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.668201 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9de11d-e210-4190-8104-e2861f46f832-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.668257 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw2gp\" (UniqueName: \"kubernetes.io/projected/bc9de11d-e210-4190-8104-e2861f46f832-kube-api-access-vw2gp\") on node \"crc\" DevicePath \"\""
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.892375 5029 generic.go:334] "Generic (PLEG): container finished" podID="bc9de11d-e210-4190-8104-e2861f46f832" containerID="8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497" exitCode=0
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.892455 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxb2x" event={"ID":"bc9de11d-e210-4190-8104-e2861f46f832","Type":"ContainerDied","Data":"8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497"}
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.892505 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxb2x" event={"ID":"bc9de11d-e210-4190-8104-e2861f46f832","Type":"ContainerDied","Data":"a5cda940740115f4c2f2e6c77091759ee67631d09ef5213144c36becb5422927"}
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.892538 5029 scope.go:117] "RemoveContainer" containerID="8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497"
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.892829 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxb2x"
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.932761 5029 scope.go:117] "RemoveContainer" containerID="9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592"
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.938397 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxb2x"]
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.950984 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jxb2x"]
Mar 13 20:54:33 crc kubenswrapper[5029]: I0313 20:54:33.966703 5029 scope.go:117] "RemoveContainer" containerID="35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48"
Mar 13 20:54:34 crc kubenswrapper[5029]: I0313 20:54:34.017492 5029 scope.go:117] "RemoveContainer" containerID="8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497"
Mar 13 20:54:34 crc kubenswrapper[5029]: E0313 20:54:34.019728 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497\": container with ID starting with 8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497 not found: ID does not exist" containerID="8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497"
Mar 13 20:54:34 crc kubenswrapper[5029]: I0313 20:54:34.019785 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497"} err="failed to get container status \"8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497\": rpc error: code = NotFound desc = could not find container \"8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497\": container with ID starting with 8724984c884f96c1637141c62a6ebdebf56259022201dd6084f80ed4f3f88497 not found: ID does not exist"
Mar 13 20:54:34 crc kubenswrapper[5029]: I0313 20:54:34.019819 5029 scope.go:117] "RemoveContainer" containerID="9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592"
Mar 13 20:54:34 crc kubenswrapper[5029]: E0313 20:54:34.020539 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592\": container with ID starting with 9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592 not found: ID does not exist" containerID="9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592"
Mar 13 20:54:34 crc kubenswrapper[5029]: I0313 20:54:34.020588 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592"} err="failed to get container status \"9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592\": rpc error: code = NotFound desc = could not find container \"9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592\": container with ID starting with 9dc77bac5d7675ede222284087bb035fb4e85282c7835a5d8be3e07e32b9a592 not found: ID does not exist"
Mar 13 20:54:34 crc kubenswrapper[5029]: I0313 20:54:34.020625 5029 scope.go:117] "RemoveContainer" containerID="35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48"
Mar 13 20:54:34 crc kubenswrapper[5029]: E0313 20:54:34.021309 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48\": container with ID starting with 35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48 not found: ID does not exist" containerID="35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48"
Mar 13 20:54:34 crc kubenswrapper[5029]: I0313 20:54:34.021463 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48"} err="failed to get container status \"35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48\": rpc error: code = NotFound desc = could not find container \"35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48\": container with ID starting with 35a1127cc23493db129f218c6d8213190b0f4bf976f1be1206534cb7236cbb48 not found: ID does not exist"
Mar 13 20:54:34 crc kubenswrapper[5029]: I0313 20:54:34.615364 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9de11d-e210-4190-8104-e2861f46f832" path="/var/lib/kubelet/pods/bc9de11d-e210-4190-8104-e2861f46f832/volumes"
Mar 13 20:54:42 crc kubenswrapper[5029]: I0313 20:54:42.709620 5029 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podae031e82-8607-4f07-a080-d259c4dd17e2"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podae031e82-8607-4f07-a080-d259c4dd17e2] : Timed out while waiting for systemd to remove kubepods-besteffort-podae031e82_8607_4f07_a080_d259c4dd17e2.slice"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.623697 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r4sgr"]
Mar 13 20:55:02 crc kubenswrapper[5029]: E0313 20:55:02.625039 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9de11d-e210-4190-8104-e2861f46f832" containerName="extract-content"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.625058 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9de11d-e210-4190-8104-e2861f46f832" containerName="extract-content"
Mar 13 20:55:02 crc kubenswrapper[5029]: E0313 20:55:02.625070 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9de11d-e210-4190-8104-e2861f46f832" containerName="registry-server"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.625077 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9de11d-e210-4190-8104-e2861f46f832" containerName="registry-server"
Mar 13 20:55:02 crc kubenswrapper[5029]: E0313 20:55:02.625124 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9de11d-e210-4190-8104-e2861f46f832" containerName="extract-utilities"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.625132 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9de11d-e210-4190-8104-e2861f46f832" containerName="extract-utilities"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.625332 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9de11d-e210-4190-8104-e2861f46f832" containerName="registry-server"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.627008 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4sgr"]
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.627136 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.769456 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-utilities\") pod \"redhat-marketplace-r4sgr\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") " pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.769785 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-catalog-content\") pod \"redhat-marketplace-r4sgr\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") " pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.769837 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6thpw\" (UniqueName: \"kubernetes.io/projected/4c12df98-3559-4b7f-b550-5fedf056a774-kube-api-access-6thpw\") pod \"redhat-marketplace-r4sgr\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") " pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.872255 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-catalog-content\") pod \"redhat-marketplace-r4sgr\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") " pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.872662 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6thpw\" (UniqueName: \"kubernetes.io/projected/4c12df98-3559-4b7f-b550-5fedf056a774-kube-api-access-6thpw\") pod \"redhat-marketplace-r4sgr\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") " pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.872806 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-catalog-content\") pod \"redhat-marketplace-r4sgr\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") " pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.872902 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-utilities\") pod \"redhat-marketplace-r4sgr\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") " pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.873242 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-utilities\") pod \"redhat-marketplace-r4sgr\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") " pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.898893 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6thpw\" (UniqueName: \"kubernetes.io/projected/4c12df98-3559-4b7f-b550-5fedf056a774-kube-api-access-6thpw\") pod \"redhat-marketplace-r4sgr\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") " pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:02 crc kubenswrapper[5029]: I0313 20:55:02.958790 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:03 crc kubenswrapper[5029]: I0313 20:55:03.501833 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4sgr"]
Mar 13 20:55:04 crc kubenswrapper[5029]: I0313 20:55:04.223336 5029 generic.go:334] "Generic (PLEG): container finished" podID="4c12df98-3559-4b7f-b550-5fedf056a774" containerID="02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1" exitCode=0
Mar 13 20:55:04 crc kubenswrapper[5029]: I0313 20:55:04.223401 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4sgr" event={"ID":"4c12df98-3559-4b7f-b550-5fedf056a774","Type":"ContainerDied","Data":"02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1"}
Mar 13 20:55:04 crc kubenswrapper[5029]: I0313 20:55:04.224144 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4sgr" event={"ID":"4c12df98-3559-4b7f-b550-5fedf056a774","Type":"ContainerStarted","Data":"f606a2c179745d8b3fedde12783345886e7811324afe1bf6b6094dba19f73b32"}
Mar 13 20:55:06 crc kubenswrapper[5029]: I0313 20:55:06.256026 5029 generic.go:334] "Generic (PLEG): container finished" podID="4c12df98-3559-4b7f-b550-5fedf056a774" containerID="312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20" exitCode=0
Mar 13 20:55:06 crc kubenswrapper[5029]: I0313 20:55:06.256136 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4sgr" event={"ID":"4c12df98-3559-4b7f-b550-5fedf056a774","Type":"ContainerDied","Data":"312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20"}
Mar 13 20:55:07 crc kubenswrapper[5029]: I0313 20:55:07.269111 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4sgr" event={"ID":"4c12df98-3559-4b7f-b550-5fedf056a774","Type":"ContainerStarted","Data":"1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b"}
Mar 13 20:55:07 crc kubenswrapper[5029]: I0313 20:55:07.300510 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r4sgr" podStartSLOduration=2.817123443 podStartE2EDuration="5.300484919s" podCreationTimestamp="2026-03-13 20:55:02 +0000 UTC" firstStartedPulling="2026-03-13 20:55:04.22677018 +0000 UTC m=+1664.242852583" lastFinishedPulling="2026-03-13 20:55:06.710131656 +0000 UTC m=+1666.726214059" observedRunningTime="2026-03-13 20:55:07.291570316 +0000 UTC m=+1667.307652739" watchObservedRunningTime="2026-03-13 20:55:07.300484919 +0000 UTC m=+1667.316567312"
Mar 13 20:55:08 crc kubenswrapper[5029]: I0313 20:55:08.823462 5029 scope.go:117] "RemoveContainer" containerID="e00e66fdc5dccc1f3ccad323476bb4612941de8aa1e1944aa48da880b61c8d4a"
Mar 13 20:55:08 crc kubenswrapper[5029]: I0313 20:55:08.873145 5029 scope.go:117] "RemoveContainer" containerID="ce4d5f28881921a2b6665cdede7f8a30d9b32d41ef9ab8c3708eadcc24b06e0c"
Mar 13 20:55:12 crc kubenswrapper[5029]: I0313 20:55:12.959105 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:12 crc kubenswrapper[5029]: I0313 20:55:12.960075 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:13 crc kubenswrapper[5029]: I0313 20:55:13.013493 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:13 crc kubenswrapper[5029]: I0313 20:55:13.403540 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:13 crc kubenswrapper[5029]: I0313 20:55:13.464910 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4sgr"]
Mar 13 20:55:15 crc kubenswrapper[5029]: I0313 20:55:15.357651 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r4sgr" podUID="4c12df98-3559-4b7f-b550-5fedf056a774" containerName="registry-server" containerID="cri-o://1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b" gracePeriod=2
Mar 13 20:55:15 crc kubenswrapper[5029]: I0313 20:55:15.997115 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.165552 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6thpw\" (UniqueName: \"kubernetes.io/projected/4c12df98-3559-4b7f-b550-5fedf056a774-kube-api-access-6thpw\") pod \"4c12df98-3559-4b7f-b550-5fedf056a774\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") "
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.166333 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-utilities\") pod \"4c12df98-3559-4b7f-b550-5fedf056a774\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") "
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.166607 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-catalog-content\") pod \"4c12df98-3559-4b7f-b550-5fedf056a774\" (UID: \"4c12df98-3559-4b7f-b550-5fedf056a774\") "
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.167617 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-utilities" (OuterVolumeSpecName: "utilities") pod "4c12df98-3559-4b7f-b550-5fedf056a774" (UID: "4c12df98-3559-4b7f-b550-5fedf056a774"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.173454 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c12df98-3559-4b7f-b550-5fedf056a774-kube-api-access-6thpw" (OuterVolumeSpecName: "kube-api-access-6thpw") pod "4c12df98-3559-4b7f-b550-5fedf056a774" (UID: "4c12df98-3559-4b7f-b550-5fedf056a774"). InnerVolumeSpecName "kube-api-access-6thpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.241709 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c12df98-3559-4b7f-b550-5fedf056a774" (UID: "4c12df98-3559-4b7f-b550-5fedf056a774"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.269925 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6thpw\" (UniqueName: \"kubernetes.io/projected/4c12df98-3559-4b7f-b550-5fedf056a774-kube-api-access-6thpw\") on node \"crc\" DevicePath \"\""
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.269976 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.269990 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c12df98-3559-4b7f-b550-5fedf056a774-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.376234 5029 generic.go:334] "Generic (PLEG): container finished" podID="4c12df98-3559-4b7f-b550-5fedf056a774" containerID="1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b" exitCode=0
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.376292 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4sgr" event={"ID":"4c12df98-3559-4b7f-b550-5fedf056a774","Type":"ContainerDied","Data":"1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b"}
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.376335 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4sgr" event={"ID":"4c12df98-3559-4b7f-b550-5fedf056a774","Type":"ContainerDied","Data":"f606a2c179745d8b3fedde12783345886e7811324afe1bf6b6094dba19f73b32"}
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.376362 5029 scope.go:117] "RemoveContainer" containerID="1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b"
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.376368 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4sgr"
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.437338 5029 scope.go:117] "RemoveContainer" containerID="312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20"
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.457254 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4sgr"]
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.485160 5029 scope.go:117] "RemoveContainer" containerID="02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1"
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.489719 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4sgr"]
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.533070 5029 scope.go:117] "RemoveContainer" containerID="1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b"
Mar 13 20:55:16 crc kubenswrapper[5029]: E0313 20:55:16.534040 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b\": container with ID starting with 1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b not found: ID does not exist" containerID="1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b"
Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.534089 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b"} err="failed to get container status \"1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b\": rpc error: code = NotFound desc = could not find container \"1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b\": container with ID starting with 
1a906130812d257c39aa3bd46bd3013a5a2473912c599e83a126c1a3aaaf075b not found: ID does not exist" Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.534115 5029 scope.go:117] "RemoveContainer" containerID="312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20" Mar 13 20:55:16 crc kubenswrapper[5029]: E0313 20:55:16.534592 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20\": container with ID starting with 312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20 not found: ID does not exist" containerID="312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20" Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.534626 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20"} err="failed to get container status \"312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20\": rpc error: code = NotFound desc = could not find container \"312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20\": container with ID starting with 312dc9dc409af6ea397cd802d755ccb6db9380709cadb11633d2978bf75d9f20 not found: ID does not exist" Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.534644 5029 scope.go:117] "RemoveContainer" containerID="02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1" Mar 13 20:55:16 crc kubenswrapper[5029]: E0313 20:55:16.535074 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1\": container with ID starting with 02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1 not found: ID does not exist" containerID="02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1" Mar 13 20:55:16 crc 
kubenswrapper[5029]: I0313 20:55:16.535116 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1"} err="failed to get container status \"02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1\": rpc error: code = NotFound desc = could not find container \"02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1\": container with ID starting with 02dab141eab243a8bcc5fe6eae124d206a2e93c4e056fa8d3a1340749ec9cfd1 not found: ID does not exist" Mar 13 20:55:16 crc kubenswrapper[5029]: I0313 20:55:16.613145 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c12df98-3559-4b7f-b550-5fedf056a774" path="/var/lib/kubelet/pods/4c12df98-3559-4b7f-b550-5fedf056a774/volumes" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.146102 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557256-848ts"] Mar 13 20:56:00 crc kubenswrapper[5029]: E0313 20:56:00.147333 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c12df98-3559-4b7f-b550-5fedf056a774" containerName="registry-server" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.147349 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c12df98-3559-4b7f-b550-5fedf056a774" containerName="registry-server" Mar 13 20:56:00 crc kubenswrapper[5029]: E0313 20:56:00.147367 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c12df98-3559-4b7f-b550-5fedf056a774" containerName="extract-utilities" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.147373 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c12df98-3559-4b7f-b550-5fedf056a774" containerName="extract-utilities" Mar 13 20:56:00 crc kubenswrapper[5029]: E0313 20:56:00.147385 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c12df98-3559-4b7f-b550-5fedf056a774" containerName="extract-content" 
Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.147393 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c12df98-3559-4b7f-b550-5fedf056a774" containerName="extract-content" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.147599 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c12df98-3559-4b7f-b550-5fedf056a774" containerName="registry-server" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.148413 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557256-848ts" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.153752 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.153810 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.155241 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.156172 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557256-848ts"] Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.221923 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6rrh\" (UniqueName: \"kubernetes.io/projected/4c6e3f8b-d4fc-495b-b76d-a97dc036b482-kube-api-access-j6rrh\") pod \"auto-csr-approver-29557256-848ts\" (UID: \"4c6e3f8b-d4fc-495b-b76d-a97dc036b482\") " pod="openshift-infra/auto-csr-approver-29557256-848ts" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.324251 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6rrh\" (UniqueName: \"kubernetes.io/projected/4c6e3f8b-d4fc-495b-b76d-a97dc036b482-kube-api-access-j6rrh\") 
pod \"auto-csr-approver-29557256-848ts\" (UID: \"4c6e3f8b-d4fc-495b-b76d-a97dc036b482\") " pod="openshift-infra/auto-csr-approver-29557256-848ts" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.344432 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6rrh\" (UniqueName: \"kubernetes.io/projected/4c6e3f8b-d4fc-495b-b76d-a97dc036b482-kube-api-access-j6rrh\") pod \"auto-csr-approver-29557256-848ts\" (UID: \"4c6e3f8b-d4fc-495b-b76d-a97dc036b482\") " pod="openshift-infra/auto-csr-approver-29557256-848ts" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.470916 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557256-848ts" Mar 13 20:56:00 crc kubenswrapper[5029]: I0313 20:56:00.976720 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557256-848ts"] Mar 13 20:56:01 crc kubenswrapper[5029]: I0313 20:56:01.842525 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557256-848ts" event={"ID":"4c6e3f8b-d4fc-495b-b76d-a97dc036b482","Type":"ContainerStarted","Data":"905a3ad63516f932d9720b707a7a051502ee254808c3421eb0baa316ea2fabcc"} Mar 13 20:56:01 crc kubenswrapper[5029]: I0313 20:56:01.950477 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:56:01 crc kubenswrapper[5029]: I0313 20:56:01.950600 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 
13 20:56:02 crc kubenswrapper[5029]: I0313 20:56:02.855182 5029 generic.go:334] "Generic (PLEG): container finished" podID="4c6e3f8b-d4fc-495b-b76d-a97dc036b482" containerID="bdd7ae829b6bf11f0af1e9614cb5e88867ce06c334b9202daaa081992a80bdea" exitCode=0 Mar 13 20:56:02 crc kubenswrapper[5029]: I0313 20:56:02.855306 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557256-848ts" event={"ID":"4c6e3f8b-d4fc-495b-b76d-a97dc036b482","Type":"ContainerDied","Data":"bdd7ae829b6bf11f0af1e9614cb5e88867ce06c334b9202daaa081992a80bdea"} Mar 13 20:56:04 crc kubenswrapper[5029]: I0313 20:56:04.190963 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557256-848ts" Mar 13 20:56:04 crc kubenswrapper[5029]: I0313 20:56:04.306511 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6rrh\" (UniqueName: \"kubernetes.io/projected/4c6e3f8b-d4fc-495b-b76d-a97dc036b482-kube-api-access-j6rrh\") pod \"4c6e3f8b-d4fc-495b-b76d-a97dc036b482\" (UID: \"4c6e3f8b-d4fc-495b-b76d-a97dc036b482\") " Mar 13 20:56:04 crc kubenswrapper[5029]: I0313 20:56:04.313056 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6e3f8b-d4fc-495b-b76d-a97dc036b482-kube-api-access-j6rrh" (OuterVolumeSpecName: "kube-api-access-j6rrh") pod "4c6e3f8b-d4fc-495b-b76d-a97dc036b482" (UID: "4c6e3f8b-d4fc-495b-b76d-a97dc036b482"). InnerVolumeSpecName "kube-api-access-j6rrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:56:04 crc kubenswrapper[5029]: I0313 20:56:04.408842 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6rrh\" (UniqueName: \"kubernetes.io/projected/4c6e3f8b-d4fc-495b-b76d-a97dc036b482-kube-api-access-j6rrh\") on node \"crc\" DevicePath \"\"" Mar 13 20:56:04 crc kubenswrapper[5029]: I0313 20:56:04.876926 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557256-848ts" event={"ID":"4c6e3f8b-d4fc-495b-b76d-a97dc036b482","Type":"ContainerDied","Data":"905a3ad63516f932d9720b707a7a051502ee254808c3421eb0baa316ea2fabcc"} Mar 13 20:56:04 crc kubenswrapper[5029]: I0313 20:56:04.877287 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="905a3ad63516f932d9720b707a7a051502ee254808c3421eb0baa316ea2fabcc" Mar 13 20:56:04 crc kubenswrapper[5029]: I0313 20:56:04.876998 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557256-848ts" Mar 13 20:56:05 crc kubenswrapper[5029]: I0313 20:56:05.261394 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557250-jwpms"] Mar 13 20:56:05 crc kubenswrapper[5029]: I0313 20:56:05.273055 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557250-jwpms"] Mar 13 20:56:06 crc kubenswrapper[5029]: I0313 20:56:06.613032 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="274b7405-641b-4d9c-90b6-7bc8d511d5ea" path="/var/lib/kubelet/pods/274b7405-641b-4d9c-90b6-7bc8d511d5ea/volumes" Mar 13 20:56:08 crc kubenswrapper[5029]: I0313 20:56:08.971792 5029 scope.go:117] "RemoveContainer" containerID="44ceb93dd233613450e5b2dac027163dd76b768e2be8839aaf172525a1547292" Mar 13 20:56:09 crc kubenswrapper[5029]: I0313 20:56:09.001459 5029 scope.go:117] "RemoveContainer" 
containerID="2756430389c114d884682574fa01ce0a3d40540564fd8713da5614cfc51abb29" Mar 13 20:56:09 crc kubenswrapper[5029]: I0313 20:56:09.025496 5029 scope.go:117] "RemoveContainer" containerID="fc64aef6dfbf66b739e8f304c9cfd646cfaa779da452629bad2eb084763b2b31" Mar 13 20:56:09 crc kubenswrapper[5029]: I0313 20:56:09.055145 5029 scope.go:117] "RemoveContainer" containerID="5b2d1f7d891c6b0bf77f8478c6879a938efd6f0883c65cf39020967e8fb32c79" Mar 13 20:56:09 crc kubenswrapper[5029]: I0313 20:56:09.098438 5029 scope.go:117] "RemoveContainer" containerID="509a4f50aba58146b47f7a6520124efbe21784b4cda8e178e183d3168e9b3af9" Mar 13 20:56:09 crc kubenswrapper[5029]: I0313 20:56:09.145972 5029 scope.go:117] "RemoveContainer" containerID="1918a216e74582b1cc35a8e106d5c3de78a51b7b18a5073a244da3af3cd5a518" Mar 13 20:56:09 crc kubenswrapper[5029]: I0313 20:56:09.168790 5029 scope.go:117] "RemoveContainer" containerID="cf7175d96d34dd5e8cafd59ef50ce0774de6072c72186454e3019d15a1943450" Mar 13 20:56:31 crc kubenswrapper[5029]: I0313 20:56:31.949908 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:56:31 crc kubenswrapper[5029]: I0313 20:56:31.950707 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:57:01 crc kubenswrapper[5029]: I0313 20:57:01.950367 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 20:57:01 crc kubenswrapper[5029]: I0313 20:57:01.951212 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 20:57:01 crc kubenswrapper[5029]: I0313 20:57:01.951279 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 20:57:01 crc kubenswrapper[5029]: I0313 20:57:01.952314 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 20:57:01 crc kubenswrapper[5029]: I0313 20:57:01.952381 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" gracePeriod=600 Mar 13 20:57:02 crc kubenswrapper[5029]: I0313 20:57:02.048692 5029 generic.go:334] "Generic (PLEG): container finished" podID="0536889c-718f-4c69-a5ca-7428e7c351db" containerID="9482e78d644cc230e2d27c2cace24c8fb889fc36435dac831e6ac53acd7a060e" exitCode=0 Mar 13 20:57:02 crc kubenswrapper[5029]: I0313 20:57:02.048755 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr" 
event={"ID":"0536889c-718f-4c69-a5ca-7428e7c351db","Type":"ContainerDied","Data":"9482e78d644cc230e2d27c2cace24c8fb889fc36435dac831e6ac53acd7a060e"} Mar 13 20:57:02 crc kubenswrapper[5029]: E0313 20:57:02.086299 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:57:02 crc kubenswrapper[5029]: E0313 20:57:02.121926 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa028723_a519_4f82_860c_4c149f3a4e4a.slice/crio-conmon-6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa028723_a519_4f82_860c_4c149f3a4e4a.slice/crio-6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1.scope\": RecentStats: unable to find data in memory cache]" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.059940 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" exitCode=0 Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.060034 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1"} Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.060108 5029 scope.go:117] "RemoveContainer" 
containerID="42ae9c192c95047ca08bd80103ba761f255a1bb01b61e6cc285f78d6d6c0169b" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.060799 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:57:03 crc kubenswrapper[5029]: E0313 20:57:03.061343 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.515608 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.699275 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp5bd\" (UniqueName: \"kubernetes.io/projected/0536889c-718f-4c69-a5ca-7428e7c351db-kube-api-access-jp5bd\") pod \"0536889c-718f-4c69-a5ca-7428e7c351db\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.699926 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-bootstrap-combined-ca-bundle\") pod \"0536889c-718f-4c69-a5ca-7428e7c351db\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.700183 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-inventory\") pod 
\"0536889c-718f-4c69-a5ca-7428e7c351db\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.700593 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-ssh-key-openstack-edpm-ipam\") pod \"0536889c-718f-4c69-a5ca-7428e7c351db\" (UID: \"0536889c-718f-4c69-a5ca-7428e7c351db\") " Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.711041 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0536889c-718f-4c69-a5ca-7428e7c351db" (UID: "0536889c-718f-4c69-a5ca-7428e7c351db"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.712119 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0536889c-718f-4c69-a5ca-7428e7c351db-kube-api-access-jp5bd" (OuterVolumeSpecName: "kube-api-access-jp5bd") pod "0536889c-718f-4c69-a5ca-7428e7c351db" (UID: "0536889c-718f-4c69-a5ca-7428e7c351db"). InnerVolumeSpecName "kube-api-access-jp5bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.738352 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-inventory" (OuterVolumeSpecName: "inventory") pod "0536889c-718f-4c69-a5ca-7428e7c351db" (UID: "0536889c-718f-4c69-a5ca-7428e7c351db"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.741674 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0536889c-718f-4c69-a5ca-7428e7c351db" (UID: "0536889c-718f-4c69-a5ca-7428e7c351db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.804614 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.805047 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp5bd\" (UniqueName: \"kubernetes.io/projected/0536889c-718f-4c69-a5ca-7428e7c351db-kube-api-access-jp5bd\") on node \"crc\" DevicePath \"\"" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.805069 5029 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 20:57:03 crc kubenswrapper[5029]: I0313 20:57:03.805084 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0536889c-718f-4c69-a5ca-7428e7c351db-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.070795 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr" event={"ID":"0536889c-718f-4c69-a5ca-7428e7c351db","Type":"ContainerDied","Data":"b1e0b704d46b6c634d6ea8e8f13e94af60cf161df8a10bd64d58ecedb5dd5cc7"} Mar 13 20:57:04 
crc kubenswrapper[5029]: I0313 20:57:04.070842 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1e0b704d46b6c634d6ea8e8f13e94af60cf161df8a10bd64d58ecedb5dd5cc7" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.070813 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.161614 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h"] Mar 13 20:57:04 crc kubenswrapper[5029]: E0313 20:57:04.162085 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6e3f8b-d4fc-495b-b76d-a97dc036b482" containerName="oc" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.162108 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6e3f8b-d4fc-495b-b76d-a97dc036b482" containerName="oc" Mar 13 20:57:04 crc kubenswrapper[5029]: E0313 20:57:04.162135 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0536889c-718f-4c69-a5ca-7428e7c351db" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.162143 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="0536889c-718f-4c69-a5ca-7428e7c351db" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.162385 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="0536889c-718f-4c69-a5ca-7428e7c351db" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.162420 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6e3f8b-d4fc-495b-b76d-a97dc036b482" containerName="oc" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.163290 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.169431 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.169457 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.169551 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.169612 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.173983 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h"] Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.317450 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.318660 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 
20:57:04.318810 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqcm6\" (UniqueName: \"kubernetes.io/projected/85414e93-71aa-49bf-b7dd-00b07149e16b-kube-api-access-lqcm6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.420391 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.420494 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.420563 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqcm6\" (UniqueName: \"kubernetes.io/projected/85414e93-71aa-49bf-b7dd-00b07149e16b-kube-api-access-lqcm6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.425060 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.425764 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.439908 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqcm6\" (UniqueName: \"kubernetes.io/projected/85414e93-71aa-49bf-b7dd-00b07149e16b-kube-api-access-lqcm6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:04 crc kubenswrapper[5029]: I0313 20:57:04.486712 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:57:05 crc kubenswrapper[5029]: I0313 20:57:05.068135 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h"] Mar 13 20:57:05 crc kubenswrapper[5029]: I0313 20:57:05.092962 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" event={"ID":"85414e93-71aa-49bf-b7dd-00b07149e16b","Type":"ContainerStarted","Data":"dea7f51fa11dcdf903a2c14a3251c11be76d360627a671a3b7a0e55a000d745f"} Mar 13 20:57:06 crc kubenswrapper[5029]: I0313 20:57:06.107383 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" event={"ID":"85414e93-71aa-49bf-b7dd-00b07149e16b","Type":"ContainerStarted","Data":"4b16f0894567f8ba17fbf3ce32513daccc6be4452f5ef630533e66f918bad2c6"} Mar 13 20:57:06 crc kubenswrapper[5029]: I0313 20:57:06.126933 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" podStartSLOduration=1.709916206 podStartE2EDuration="2.126896286s" podCreationTimestamp="2026-03-13 20:57:04 +0000 UTC" firstStartedPulling="2026-03-13 20:57:05.073713477 +0000 UTC m=+1785.089795880" lastFinishedPulling="2026-03-13 20:57:05.490693567 +0000 UTC m=+1785.506775960" observedRunningTime="2026-03-13 20:57:06.126839924 +0000 UTC m=+1786.142922327" watchObservedRunningTime="2026-03-13 20:57:06.126896286 +0000 UTC m=+1786.142978689" Mar 13 20:57:09 crc kubenswrapper[5029]: I0313 20:57:09.295250 5029 scope.go:117] "RemoveContainer" containerID="45b72e2b0db3c5b81f5c7582f6e0ea7b35e46f79855b2ad9bfba82d06c890d63" Mar 13 20:57:09 crc kubenswrapper[5029]: I0313 20:57:09.328518 5029 scope.go:117] "RemoveContainer" containerID="cf84df7733c24482cd3931d0f5871c77b07206f191412540c9fedfeeb421f913" Mar 13 
20:57:09 crc kubenswrapper[5029]: I0313 20:57:09.360230 5029 scope.go:117] "RemoveContainer" containerID="924fff1b8ed5903434680277adb181b7483b716961296b65e37def4eb3e1ab15" Mar 13 20:57:09 crc kubenswrapper[5029]: I0313 20:57:09.387095 5029 scope.go:117] "RemoveContainer" containerID="7501ce3787ad259910a4dd5f014c8bfed9a1b2645235474f868461e4e63ef402" Mar 13 20:57:16 crc kubenswrapper[5029]: I0313 20:57:16.599658 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:57:16 crc kubenswrapper[5029]: E0313 20:57:16.600626 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:57:28 crc kubenswrapper[5029]: I0313 20:57:28.599706 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:57:28 crc kubenswrapper[5029]: E0313 20:57:28.600541 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:57:34 crc kubenswrapper[5029]: I0313 20:57:34.055200 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a338-account-create-update-bfp8h"] Mar 13 20:57:34 crc kubenswrapper[5029]: I0313 20:57:34.066211 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-create-wwvxt"] Mar 13 20:57:34 crc kubenswrapper[5029]: I0313 20:57:34.076597 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-wwvxt"] Mar 13 20:57:34 crc kubenswrapper[5029]: I0313 20:57:34.088057 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a338-account-create-update-bfp8h"] Mar 13 20:57:34 crc kubenswrapper[5029]: I0313 20:57:34.614638 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55bdc521-fd20-4ff3-8561-715dd41e604f" path="/var/lib/kubelet/pods/55bdc521-fd20-4ff3-8561-715dd41e604f/volumes" Mar 13 20:57:34 crc kubenswrapper[5029]: I0313 20:57:34.616016 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb427263-6866-4a0a-ab33-e69f6890b52a" path="/var/lib/kubelet/pods/fb427263-6866-4a0a-ab33-e69f6890b52a/volumes" Mar 13 20:57:35 crc kubenswrapper[5029]: I0313 20:57:35.040580 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rgf5k"] Mar 13 20:57:35 crc kubenswrapper[5029]: I0313 20:57:35.052770 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rgf5k"] Mar 13 20:57:35 crc kubenswrapper[5029]: I0313 20:57:35.062235 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c8fe-account-create-update-srjsm"] Mar 13 20:57:35 crc kubenswrapper[5029]: I0313 20:57:35.073896 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c8fe-account-create-update-srjsm"] Mar 13 20:57:36 crc kubenswrapper[5029]: I0313 20:57:36.045787 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pswbz"] Mar 13 20:57:36 crc kubenswrapper[5029]: I0313 20:57:36.061059 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1a94-account-create-update-phrtc"] Mar 13 20:57:36 crc kubenswrapper[5029]: I0313 20:57:36.072957 5029 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/placement-db-create-pswbz"] Mar 13 20:57:36 crc kubenswrapper[5029]: I0313 20:57:36.088260 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1a94-account-create-update-phrtc"] Mar 13 20:57:36 crc kubenswrapper[5029]: I0313 20:57:36.616660 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5e0db0-ec13-4d33-9c31-311982a5d598" path="/var/lib/kubelet/pods/1e5e0db0-ec13-4d33-9c31-311982a5d598/volumes" Mar 13 20:57:36 crc kubenswrapper[5029]: I0313 20:57:36.622175 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205e0049-29c0-4ebc-8cb3-670e58c8af28" path="/var/lib/kubelet/pods/205e0049-29c0-4ebc-8cb3-670e58c8af28/volumes" Mar 13 20:57:36 crc kubenswrapper[5029]: I0313 20:57:36.625532 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e35749-84ef-4c66-ba93-835828ffcbda" path="/var/lib/kubelet/pods/81e35749-84ef-4c66-ba93-835828ffcbda/volumes" Mar 13 20:57:36 crc kubenswrapper[5029]: I0313 20:57:36.628086 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a13c94-9043-44b0-a90a-0f6b60863453" path="/var/lib/kubelet/pods/d6a13c94-9043-44b0-a90a-0f6b60863453/volumes" Mar 13 20:57:43 crc kubenswrapper[5029]: I0313 20:57:43.600897 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:57:43 crc kubenswrapper[5029]: E0313 20:57:43.602444 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:57:56 crc kubenswrapper[5029]: I0313 20:57:56.048845 5029 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h227t"] Mar 13 20:57:56 crc kubenswrapper[5029]: I0313 20:57:56.059745 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-h227t"] Mar 13 20:57:56 crc kubenswrapper[5029]: I0313 20:57:56.599341 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:57:56 crc kubenswrapper[5029]: E0313 20:57:56.599795 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:57:56 crc kubenswrapper[5029]: I0313 20:57:56.625470 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cceec7e-5b5a-45b4-8480-9f44ce88107a" path="/var/lib/kubelet/pods/9cceec7e-5b5a-45b4-8480-9f44ce88107a/volumes" Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.151096 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557258-454kv"] Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.153821 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557258-454kv" Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.163538 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557258-454kv"] Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.176996 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.177476 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.177795 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.240175 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tcg\" (UniqueName: \"kubernetes.io/projected/841dfc0b-34fc-46ec-bf2a-f9578c341e92-kube-api-access-c2tcg\") pod \"auto-csr-approver-29557258-454kv\" (UID: \"841dfc0b-34fc-46ec-bf2a-f9578c341e92\") " pod="openshift-infra/auto-csr-approver-29557258-454kv" Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.343397 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2tcg\" (UniqueName: \"kubernetes.io/projected/841dfc0b-34fc-46ec-bf2a-f9578c341e92-kube-api-access-c2tcg\") pod \"auto-csr-approver-29557258-454kv\" (UID: \"841dfc0b-34fc-46ec-bf2a-f9578c341e92\") " pod="openshift-infra/auto-csr-approver-29557258-454kv" Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.365167 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2tcg\" (UniqueName: \"kubernetes.io/projected/841dfc0b-34fc-46ec-bf2a-f9578c341e92-kube-api-access-c2tcg\") pod \"auto-csr-approver-29557258-454kv\" (UID: \"841dfc0b-34fc-46ec-bf2a-f9578c341e92\") " 
pod="openshift-infra/auto-csr-approver-29557258-454kv" Mar 13 20:58:00 crc kubenswrapper[5029]: I0313 20:58:00.504659 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557258-454kv" Mar 13 20:58:01 crc kubenswrapper[5029]: I0313 20:58:01.061474 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557258-454kv"] Mar 13 20:58:01 crc kubenswrapper[5029]: I0313 20:58:01.719026 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557258-454kv" event={"ID":"841dfc0b-34fc-46ec-bf2a-f9578c341e92","Type":"ContainerStarted","Data":"a231128896de4c5058967a5239972f7c4fa035e7aa266b7008b3824bcf647cab"} Mar 13 20:58:02 crc kubenswrapper[5029]: I0313 20:58:02.730478 5029 generic.go:334] "Generic (PLEG): container finished" podID="841dfc0b-34fc-46ec-bf2a-f9578c341e92" containerID="bfc158d8f0cf1ce7c33de51ee6e31b7a7fd29cbe5a31ba45e663fbaf00551664" exitCode=0 Mar 13 20:58:02 crc kubenswrapper[5029]: I0313 20:58:02.730542 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557258-454kv" event={"ID":"841dfc0b-34fc-46ec-bf2a-f9578c341e92","Type":"ContainerDied","Data":"bfc158d8f0cf1ce7c33de51ee6e31b7a7fd29cbe5a31ba45e663fbaf00551664"} Mar 13 20:58:04 crc kubenswrapper[5029]: I0313 20:58:04.089042 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557258-454kv" Mar 13 20:58:04 crc kubenswrapper[5029]: I0313 20:58:04.270679 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2tcg\" (UniqueName: \"kubernetes.io/projected/841dfc0b-34fc-46ec-bf2a-f9578c341e92-kube-api-access-c2tcg\") pod \"841dfc0b-34fc-46ec-bf2a-f9578c341e92\" (UID: \"841dfc0b-34fc-46ec-bf2a-f9578c341e92\") " Mar 13 20:58:04 crc kubenswrapper[5029]: I0313 20:58:04.276326 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/841dfc0b-34fc-46ec-bf2a-f9578c341e92-kube-api-access-c2tcg" (OuterVolumeSpecName: "kube-api-access-c2tcg") pod "841dfc0b-34fc-46ec-bf2a-f9578c341e92" (UID: "841dfc0b-34fc-46ec-bf2a-f9578c341e92"). InnerVolumeSpecName "kube-api-access-c2tcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:58:04 crc kubenswrapper[5029]: I0313 20:58:04.374330 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2tcg\" (UniqueName: \"kubernetes.io/projected/841dfc0b-34fc-46ec-bf2a-f9578c341e92-kube-api-access-c2tcg\") on node \"crc\" DevicePath \"\"" Mar 13 20:58:04 crc kubenswrapper[5029]: I0313 20:58:04.753845 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557258-454kv" event={"ID":"841dfc0b-34fc-46ec-bf2a-f9578c341e92","Type":"ContainerDied","Data":"a231128896de4c5058967a5239972f7c4fa035e7aa266b7008b3824bcf647cab"} Mar 13 20:58:04 crc kubenswrapper[5029]: I0313 20:58:04.754274 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a231128896de4c5058967a5239972f7c4fa035e7aa266b7008b3824bcf647cab" Mar 13 20:58:04 crc kubenswrapper[5029]: I0313 20:58:04.754005 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557258-454kv" Mar 13 20:58:05 crc kubenswrapper[5029]: I0313 20:58:05.176464 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557252-qnqbb"] Mar 13 20:58:05 crc kubenswrapper[5029]: I0313 20:58:05.187410 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557252-qnqbb"] Mar 13 20:58:06 crc kubenswrapper[5029]: I0313 20:58:06.614182 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793cd1b3-1bef-48e6-8a58-1a475d06d99f" path="/var/lib/kubelet/pods/793cd1b3-1bef-48e6-8a58-1a475d06d99f/volumes" Mar 13 20:58:07 crc kubenswrapper[5029]: I0313 20:58:07.039840 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-d4mwd"] Mar 13 20:58:07 crc kubenswrapper[5029]: I0313 20:58:07.051769 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-d4mwd"] Mar 13 20:58:08 crc kubenswrapper[5029]: I0313 20:58:08.600111 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:58:08 crc kubenswrapper[5029]: E0313 20:58:08.600756 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:58:08 crc kubenswrapper[5029]: I0313 20:58:08.613053 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fdc768-348b-4581-a918-a009351efeee" path="/var/lib/kubelet/pods/f3fdc768-348b-4581-a918-a009351efeee/volumes" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.440570 5029 scope.go:117] 
"RemoveContainer" containerID="941abca712d6cc10ba4fd44442e3d4a2baae7fd375af7dfdf49a9d79ff55e945" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.475068 5029 scope.go:117] "RemoveContainer" containerID="3bc964d09ae70bbe583878d31a8e5fcdf1551597251dc04c22a2c5259743d3d7" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.530406 5029 scope.go:117] "RemoveContainer" containerID="b3f3f36afde5bad99e1d0336c5a5a61b68adb19e8da6edbf7bc9b5768f51cd18" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.582321 5029 scope.go:117] "RemoveContainer" containerID="932bc1a58c685d23a716f969afd784d9f6ab1d3ec96fb391cd6d1c679f9c869f" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.628528 5029 scope.go:117] "RemoveContainer" containerID="b99729606e95ff8ba7c3943c0cd738121474ba8d694f5297382de70bb267ac8c" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.676502 5029 scope.go:117] "RemoveContainer" containerID="c94405627ad1a0d0ab5d8cc6cf9533c40f3fcbb38dbe3239d2e6e0553c302c7a" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.710674 5029 scope.go:117] "RemoveContainer" containerID="4ecd809812489fa6d2ba9e0153821a7f4e50c471fd0246c1a65b59775012418c" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.756354 5029 scope.go:117] "RemoveContainer" containerID="23f50f0ba8a8e0fafb1d784f3c3ce17bbf23f89ed3df191fc166a0b3978d35ea" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.776676 5029 scope.go:117] "RemoveContainer" containerID="e854972751227e9daeb84215df7b13d62235c8bb747460fc4e9973d069bd1198" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.818814 5029 scope.go:117] "RemoveContainer" containerID="786f5c8c418c19e76dcba4bcebdbe3a09ff7c6bd765b3371107a549006a33c79" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.845066 5029 scope.go:117] "RemoveContainer" containerID="2113b4c7052640b04abaeac4661ed3b21b11aa11f8963cd632dfbbc21d79a667" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.880106 5029 scope.go:117] "RemoveContainer" 
containerID="89b9d55cce5567356ede20aae3526de586639b64acbb27fa74f8c5bc4ccf3f6e" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.908082 5029 scope.go:117] "RemoveContainer" containerID="8af2760983564bb233cc0b2c486069de451daf176e96926ea1b8c6f3542c70e6" Mar 13 20:58:09 crc kubenswrapper[5029]: I0313 20:58:09.958941 5029 scope.go:117] "RemoveContainer" containerID="fbc80d437010d0e8e16130343bf792279e595d1ac96f24d57d23a3e5ca38dd7d" Mar 13 20:58:14 crc kubenswrapper[5029]: I0313 20:58:14.045074 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-fjnrj"] Mar 13 20:58:14 crc kubenswrapper[5029]: I0313 20:58:14.060215 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-fjnrj"] Mar 13 20:58:14 crc kubenswrapper[5029]: I0313 20:58:14.620634 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba61da6-5905-40c4-bdf7-dcd9b5e622f1" path="/var/lib/kubelet/pods/eba61da6-5905-40c4-bdf7-dcd9b5e622f1/volumes" Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.059194 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-3128-account-create-update-4n9hw"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.071774 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dlcdx"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.084177 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ghqf7"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.096342 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dlcdx"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.105460 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ghqf7"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.113917 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-3128-account-create-update-4n9hw"] Mar 13 
20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.122087 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b0fa-account-create-update-jg4lr"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.132691 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-strtq"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.141163 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-strtq"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.148869 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7762-account-create-update-xzjrj"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.158632 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6239-account-create-update-sx4pg"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.167885 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b0fa-account-create-update-jg4lr"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.178111 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6239-account-create-update-sx4pg"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.186379 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7762-account-create-update-xzjrj"] Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.612044 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da3ca91-2523-4bbc-9ee8-5957e040e522" path="/var/lib/kubelet/pods/1da3ca91-2523-4bbc-9ee8-5957e040e522/volumes" Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.612711 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4584e142-8670-4bae-a757-fcbe7cb3e614" path="/var/lib/kubelet/pods/4584e142-8670-4bae-a757-fcbe7cb3e614/volumes" Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.613342 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6cc0a76d-907f-4dd2-99be-8dcde78b34b6" path="/var/lib/kubelet/pods/6cc0a76d-907f-4dd2-99be-8dcde78b34b6/volumes" Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.613902 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1902f70-49a1-454a-8f7c-e90e2aa9c8ea" path="/var/lib/kubelet/pods/e1902f70-49a1-454a-8f7c-e90e2aa9c8ea/volumes" Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.615032 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e22e682c-f9d6-4ef0-a8ad-b87aea2ef852" path="/var/lib/kubelet/pods/e22e682c-f9d6-4ef0-a8ad-b87aea2ef852/volumes" Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.615557 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e70a7564-5a51-4289-8f7f-22c3258a649a" path="/var/lib/kubelet/pods/e70a7564-5a51-4289-8f7f-22c3258a649a/volumes" Mar 13 20:58:18 crc kubenswrapper[5029]: I0313 20:58:18.616155 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9745dc7-7db8-47fd-9e70-4e88a4652c52" path="/var/lib/kubelet/pods/e9745dc7-7db8-47fd-9e70-4e88a4652c52/volumes" Mar 13 20:58:23 crc kubenswrapper[5029]: I0313 20:58:23.600396 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:58:23 crc kubenswrapper[5029]: E0313 20:58:23.601600 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:58:24 crc kubenswrapper[5029]: I0313 20:58:24.039023 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-44kzh"] Mar 13 20:58:24 crc kubenswrapper[5029]: I0313 
20:58:24.053839 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-44kzh"] Mar 13 20:58:24 crc kubenswrapper[5029]: I0313 20:58:24.615189 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73fbf5bd-1541-450a-be13-daf65ce110ac" path="/var/lib/kubelet/pods/73fbf5bd-1541-450a-be13-daf65ce110ac/volumes" Mar 13 20:58:34 crc kubenswrapper[5029]: I0313 20:58:34.599149 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:58:34 crc kubenswrapper[5029]: E0313 20:58:34.600261 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:58:48 crc kubenswrapper[5029]: I0313 20:58:48.209763 5029 generic.go:334] "Generic (PLEG): container finished" podID="85414e93-71aa-49bf-b7dd-00b07149e16b" containerID="4b16f0894567f8ba17fbf3ce32513daccc6be4452f5ef630533e66f918bad2c6" exitCode=0 Mar 13 20:58:48 crc kubenswrapper[5029]: I0313 20:58:48.210149 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" event={"ID":"85414e93-71aa-49bf-b7dd-00b07149e16b","Type":"ContainerDied","Data":"4b16f0894567f8ba17fbf3ce32513daccc6be4452f5ef630533e66f918bad2c6"} Mar 13 20:58:48 crc kubenswrapper[5029]: I0313 20:58:48.600070 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:58:48 crc kubenswrapper[5029]: E0313 20:58:48.600524 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.700713 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.823820 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqcm6\" (UniqueName: \"kubernetes.io/projected/85414e93-71aa-49bf-b7dd-00b07149e16b-kube-api-access-lqcm6\") pod \"85414e93-71aa-49bf-b7dd-00b07149e16b\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.824027 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-ssh-key-openstack-edpm-ipam\") pod \"85414e93-71aa-49bf-b7dd-00b07149e16b\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.824150 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-inventory\") pod \"85414e93-71aa-49bf-b7dd-00b07149e16b\" (UID: \"85414e93-71aa-49bf-b7dd-00b07149e16b\") " Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.839096 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85414e93-71aa-49bf-b7dd-00b07149e16b-kube-api-access-lqcm6" (OuterVolumeSpecName: "kube-api-access-lqcm6") pod "85414e93-71aa-49bf-b7dd-00b07149e16b" (UID: "85414e93-71aa-49bf-b7dd-00b07149e16b"). 
InnerVolumeSpecName "kube-api-access-lqcm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.868476 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-inventory" (OuterVolumeSpecName: "inventory") pod "85414e93-71aa-49bf-b7dd-00b07149e16b" (UID: "85414e93-71aa-49bf-b7dd-00b07149e16b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.868591 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85414e93-71aa-49bf-b7dd-00b07149e16b" (UID: "85414e93-71aa-49bf-b7dd-00b07149e16b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.926886 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqcm6\" (UniqueName: \"kubernetes.io/projected/85414e93-71aa-49bf-b7dd-00b07149e16b-kube-api-access-lqcm6\") on node \"crc\" DevicePath \"\"" Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.926938 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 20:58:49 crc kubenswrapper[5029]: I0313 20:58:49.926952 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85414e93-71aa-49bf-b7dd-00b07149e16b-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.236797 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" event={"ID":"85414e93-71aa-49bf-b7dd-00b07149e16b","Type":"ContainerDied","Data":"dea7f51fa11dcdf903a2c14a3251c11be76d360627a671a3b7a0e55a000d745f"} Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.236927 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea7f51fa11dcdf903a2c14a3251c11be76d360627a671a3b7a0e55a000d745f" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.236973 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.330536 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8"] Mar 13 20:58:50 crc kubenswrapper[5029]: E0313 20:58:50.330990 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85414e93-71aa-49bf-b7dd-00b07149e16b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.331011 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="85414e93-71aa-49bf-b7dd-00b07149e16b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 13 20:58:50 crc kubenswrapper[5029]: E0313 20:58:50.331041 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841dfc0b-34fc-46ec-bf2a-f9578c341e92" containerName="oc" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.331047 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="841dfc0b-34fc-46ec-bf2a-f9578c341e92" containerName="oc" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.331228 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="841dfc0b-34fc-46ec-bf2a-f9578c341e92" containerName="oc" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.331259 5029 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="85414e93-71aa-49bf-b7dd-00b07149e16b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.331929 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.337818 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.337975 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.338286 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.342624 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.348085 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8"] Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.369092 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.369173 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dtb\" (UniqueName: 
\"kubernetes.io/projected/6566347c-a319-4ac9-a859-8cff6b7f47c0-kube-api-access-x6dtb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.369243 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.471569 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.471639 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dtb\" (UniqueName: \"kubernetes.io/projected/6566347c-a319-4ac9-a859-8cff6b7f47c0-kube-api-access-x6dtb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.471703 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8\" (UID: 
\"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.475774 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.475886 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.493718 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dtb\" (UniqueName: \"kubernetes.io/projected/6566347c-a319-4ac9-a859-8cff6b7f47c0-kube-api-access-x6dtb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:50 crc kubenswrapper[5029]: I0313 20:58:50.668706 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 20:58:51 crc kubenswrapper[5029]: I0313 20:58:51.202164 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8"] Mar 13 20:58:51 crc kubenswrapper[5029]: I0313 20:58:51.207015 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 20:58:51 crc kubenswrapper[5029]: I0313 20:58:51.253691 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" event={"ID":"6566347c-a319-4ac9-a859-8cff6b7f47c0","Type":"ContainerStarted","Data":"af93e64abf5489da059433f002ddb498bac8a85790ef645379b3fdf66f2be1dc"} Mar 13 20:58:52 crc kubenswrapper[5029]: I0313 20:58:52.265357 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" event={"ID":"6566347c-a319-4ac9-a859-8cff6b7f47c0","Type":"ContainerStarted","Data":"a8bbe3ef13cf4ca18d9d1bedbc1f813cde319c59464d19f9ad99e097eca1901e"} Mar 13 20:58:52 crc kubenswrapper[5029]: I0313 20:58:52.283457 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" podStartSLOduration=1.8271757499999999 podStartE2EDuration="2.283433631s" podCreationTimestamp="2026-03-13 20:58:50 +0000 UTC" firstStartedPulling="2026-03-13 20:58:51.206746192 +0000 UTC m=+1891.222828595" lastFinishedPulling="2026-03-13 20:58:51.663004083 +0000 UTC m=+1891.679086476" observedRunningTime="2026-03-13 20:58:52.283295977 +0000 UTC m=+1892.299378400" watchObservedRunningTime="2026-03-13 20:58:52.283433631 +0000 UTC m=+1892.299516034" Mar 13 20:58:58 crc kubenswrapper[5029]: I0313 20:58:58.048389 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qdq6p"] Mar 13 20:58:58 crc 
kubenswrapper[5029]: I0313 20:58:58.057552 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qdq6p"] Mar 13 20:58:58 crc kubenswrapper[5029]: I0313 20:58:58.612141 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd74a89-871d-499c-9362-d2ee8713147a" path="/var/lib/kubelet/pods/4cd74a89-871d-499c-9362-d2ee8713147a/volumes" Mar 13 20:59:03 crc kubenswrapper[5029]: I0313 20:59:03.600560 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:59:03 crc kubenswrapper[5029]: E0313 20:59:03.602265 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.053469 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qcjtl"] Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.063711 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-qcjtl"] Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.265992 5029 scope.go:117] "RemoveContainer" containerID="8c4f7eb4e692d283364545720f6f4377dbeb4858ae2908160a897ee0cc59f690" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.293130 5029 scope.go:117] "RemoveContainer" containerID="2142b16c353fe61ad5f6e3ed36766a4bcd0cc2f5603879cba12d4d5612d8d264" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.342311 5029 scope.go:117] "RemoveContainer" containerID="f01a5be770269349a81fcfeef8e234871ddb85d02a87e24e8a345ef7d329a17b" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.397735 5029 scope.go:117] 
"RemoveContainer" containerID="cc61ac623e976451fdba84b055dbdfc2202ed91f2f114dd2ac339ec80e7dcc45" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.466469 5029 scope.go:117] "RemoveContainer" containerID="6aceb4f35c9aa0c2d2259e6a2de1e32d6ef1d23e6f0098d128d9ecd972bb6795" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.508759 5029 scope.go:117] "RemoveContainer" containerID="049aa96075ef0ac84de62704326a7851cea6d3373e67e731afff47bde62f8110" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.570384 5029 scope.go:117] "RemoveContainer" containerID="036c75a5a372348a1fb1e35e152bd9e5bda1fb5aae5f2275474af750989f85fb" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.611592 5029 scope.go:117] "RemoveContainer" containerID="228079e89dc1372cfa4435fbee00985016bdda5232b901070c0d1d2349f8af7e" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.618351 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c75c1c18-27e6-4fae-bf58-03387b32e4f3" path="/var/lib/kubelet/pods/c75c1c18-27e6-4fae-bf58-03387b32e4f3/volumes" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.658785 5029 scope.go:117] "RemoveContainer" containerID="186836aa2d462544ae027ce0afa042a2aa8a312169a1ad0ac4d9b916501037f0" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.679840 5029 scope.go:117] "RemoveContainer" containerID="9bf8aca3ecad38f35de125da898a0c0e6af8f9f4a3e0c8b68486ea77d62d5a98" Mar 13 20:59:10 crc kubenswrapper[5029]: I0313 20:59:10.715670 5029 scope.go:117] "RemoveContainer" containerID="531b68d8da82b1abbefcb1f1bb823b6517d9721fdfab8292ec36bb4778d381fa" Mar 13 20:59:13 crc kubenswrapper[5029]: I0313 20:59:13.045707 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xmjp6"] Mar 13 20:59:13 crc kubenswrapper[5029]: I0313 20:59:13.055761 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xmjp6"] Mar 13 20:59:14 crc kubenswrapper[5029]: I0313 20:59:14.615044 5029 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c020ac40-202f-4f46-b658-f1cce4d0ad1d" path="/var/lib/kubelet/pods/c020ac40-202f-4f46-b658-f1cce4d0ad1d/volumes" Mar 13 20:59:15 crc kubenswrapper[5029]: I0313 20:59:15.600017 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:59:15 crc kubenswrapper[5029]: E0313 20:59:15.600668 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:59:23 crc kubenswrapper[5029]: I0313 20:59:23.038287 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h5hp5"] Mar 13 20:59:23 crc kubenswrapper[5029]: I0313 20:59:23.051822 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h5hp5"] Mar 13 20:59:24 crc kubenswrapper[5029]: I0313 20:59:24.614703 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5243e50-28ff-4f5c-aeb1-97a87b1f2765" path="/var/lib/kubelet/pods/a5243e50-28ff-4f5c-aeb1-97a87b1f2765/volumes" Mar 13 20:59:27 crc kubenswrapper[5029]: I0313 20:59:27.599327 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:59:27 crc kubenswrapper[5029]: E0313 20:59:27.600161 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:59:30 crc kubenswrapper[5029]: I0313 20:59:30.032760 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xhhzb"] Mar 13 20:59:30 crc kubenswrapper[5029]: I0313 20:59:30.043042 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xhhzb"] Mar 13 20:59:30 crc kubenswrapper[5029]: I0313 20:59:30.615112 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a13c03-b012-4416-bb5b-3ff21417290a" path="/var/lib/kubelet/pods/e5a13c03-b012-4416-bb5b-3ff21417290a/volumes" Mar 13 20:59:39 crc kubenswrapper[5029]: I0313 20:59:39.035046 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-76l7z"] Mar 13 20:59:39 crc kubenswrapper[5029]: I0313 20:59:39.044911 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-76l7z"] Mar 13 20:59:40 crc kubenswrapper[5029]: I0313 20:59:40.611110 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:59:40 crc kubenswrapper[5029]: E0313 20:59:40.611515 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:59:40 crc kubenswrapper[5029]: I0313 20:59:40.617936 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27175d1-38d4-4709-9d98-b71adc445f02" path="/var/lib/kubelet/pods/e27175d1-38d4-4709-9d98-b71adc445f02/volumes" Mar 13 20:59:51 crc kubenswrapper[5029]: I0313 20:59:51.600543 5029 scope.go:117] 
"RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 20:59:51 crc kubenswrapper[5029]: E0313 20:59:51.601358 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 20:59:58 crc kubenswrapper[5029]: I0313 20:59:58.958244 5029 generic.go:334] "Generic (PLEG): container finished" podID="6566347c-a319-4ac9-a859-8cff6b7f47c0" containerID="a8bbe3ef13cf4ca18d9d1bedbc1f813cde319c59464d19f9ad99e097eca1901e" exitCode=0 Mar 13 20:59:58 crc kubenswrapper[5029]: I0313 20:59:58.958497 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" event={"ID":"6566347c-a319-4ac9-a859-8cff6b7f47c0","Type":"ContainerDied","Data":"a8bbe3ef13cf4ca18d9d1bedbc1f813cde319c59464d19f9ad99e097eca1901e"} Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.148172 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557260-sbf99"] Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.149865 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557260-sbf99" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.151835 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.152009 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.156529 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.157125 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk"] Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.180813 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxggj\" (UniqueName: \"kubernetes.io/projected/cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a-kube-api-access-bxggj\") pod \"auto-csr-approver-29557260-sbf99\" (UID: \"cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a\") " pod="openshift-infra/auto-csr-approver-29557260-sbf99" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.192515 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk"] Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.192938 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.201145 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.201361 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.220418 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557260-sbf99"] Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.282824 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxggj\" (UniqueName: \"kubernetes.io/projected/cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a-kube-api-access-bxggj\") pod \"auto-csr-approver-29557260-sbf99\" (UID: \"cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a\") " pod="openshift-infra/auto-csr-approver-29557260-sbf99" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.303641 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxggj\" (UniqueName: \"kubernetes.io/projected/cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a-kube-api-access-bxggj\") pod \"auto-csr-approver-29557260-sbf99\" (UID: \"cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a\") " pod="openshift-infra/auto-csr-approver-29557260-sbf99" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.384803 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51932f39-1baa-4d43-98ee-a58dccb6251b-config-volume\") pod \"collect-profiles-29557260-8hvkk\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 
21:00:00.385306 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51932f39-1baa-4d43-98ee-a58dccb6251b-secret-volume\") pod \"collect-profiles-29557260-8hvkk\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.385347 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drkmt\" (UniqueName: \"kubernetes.io/projected/51932f39-1baa-4d43-98ee-a58dccb6251b-kube-api-access-drkmt\") pod \"collect-profiles-29557260-8hvkk\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.487110 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51932f39-1baa-4d43-98ee-a58dccb6251b-secret-volume\") pod \"collect-profiles-29557260-8hvkk\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.487217 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drkmt\" (UniqueName: \"kubernetes.io/projected/51932f39-1baa-4d43-98ee-a58dccb6251b-kube-api-access-drkmt\") pod \"collect-profiles-29557260-8hvkk\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.487268 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51932f39-1baa-4d43-98ee-a58dccb6251b-config-volume\") pod \"collect-profiles-29557260-8hvkk\" (UID: 
\"51932f39-1baa-4d43-98ee-a58dccb6251b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.489003 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51932f39-1baa-4d43-98ee-a58dccb6251b-config-volume\") pod \"collect-profiles-29557260-8hvkk\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.493985 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51932f39-1baa-4d43-98ee-a58dccb6251b-secret-volume\") pod \"collect-profiles-29557260-8hvkk\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.495907 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557260-sbf99" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.508337 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drkmt\" (UniqueName: \"kubernetes.io/projected/51932f39-1baa-4d43-98ee-a58dccb6251b-kube-api-access-drkmt\") pod \"collect-profiles-29557260-8hvkk\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.524163 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.615879 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.795440 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-ssh-key-openstack-edpm-ipam\") pod \"6566347c-a319-4ac9-a859-8cff6b7f47c0\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.795509 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-inventory\") pod \"6566347c-a319-4ac9-a859-8cff6b7f47c0\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.795693 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6dtb\" (UniqueName: \"kubernetes.io/projected/6566347c-a319-4ac9-a859-8cff6b7f47c0-kube-api-access-x6dtb\") pod \"6566347c-a319-4ac9-a859-8cff6b7f47c0\" (UID: \"6566347c-a319-4ac9-a859-8cff6b7f47c0\") " Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.802023 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6566347c-a319-4ac9-a859-8cff6b7f47c0-kube-api-access-x6dtb" (OuterVolumeSpecName: "kube-api-access-x6dtb") pod "6566347c-a319-4ac9-a859-8cff6b7f47c0" (UID: "6566347c-a319-4ac9-a859-8cff6b7f47c0"). InnerVolumeSpecName "kube-api-access-x6dtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.827874 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-inventory" (OuterVolumeSpecName: "inventory") pod "6566347c-a319-4ac9-a859-8cff6b7f47c0" (UID: "6566347c-a319-4ac9-a859-8cff6b7f47c0"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.828937 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6566347c-a319-4ac9-a859-8cff6b7f47c0" (UID: "6566347c-a319-4ac9-a859-8cff6b7f47c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.899360 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.899421 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6566347c-a319-4ac9-a859-8cff6b7f47c0-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.899432 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6dtb\" (UniqueName: \"kubernetes.io/projected/6566347c-a319-4ac9-a859-8cff6b7f47c0-kube-api-access-x6dtb\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.991264 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.991872 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8" event={"ID":"6566347c-a319-4ac9-a859-8cff6b7f47c0","Type":"ContainerDied","Data":"af93e64abf5489da059433f002ddb498bac8a85790ef645379b3fdf66f2be1dc"} Mar 13 21:00:00 crc kubenswrapper[5029]: I0313 21:00:00.991919 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af93e64abf5489da059433f002ddb498bac8a85790ef645379b3fdf66f2be1dc" Mar 13 21:00:01 crc kubenswrapper[5029]: W0313 21:00:01.025223 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdde5ceb_44a0_4e21_b8e1_2dc5ee3fcc0a.slice/crio-36af3b458da1d4139108870aae99b912acfb62659e446ec898c0d78092881cd9 WatchSource:0}: Error finding container 36af3b458da1d4139108870aae99b912acfb62659e446ec898c0d78092881cd9: Status 404 returned error can't find the container with id 36af3b458da1d4139108870aae99b912acfb62659e446ec898c0d78092881cd9 Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.025241 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557260-sbf99"] Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.075183 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd"] Mar 13 21:00:01 crc kubenswrapper[5029]: E0313 21:00:01.076578 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6566347c-a319-4ac9-a859-8cff6b7f47c0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.076615 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="6566347c-a319-4ac9-a859-8cff6b7f47c0" 
containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.077021 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="6566347c-a319-4ac9-a859-8cff6b7f47c0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.078358 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.082690 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.083071 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.083407 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.083677 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.102023 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd"] Mar 13 21:00:01 crc kubenswrapper[5029]: W0313 21:00:01.116401 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51932f39_1baa_4d43_98ee_a58dccb6251b.slice/crio-c0242d5c871bee9aaf2b1a9fb02714156dd3e0d089d0306808980db90cea8c6e WatchSource:0}: Error finding container c0242d5c871bee9aaf2b1a9fb02714156dd3e0d089d0306808980db90cea8c6e: Status 404 returned error can't find the container with id c0242d5c871bee9aaf2b1a9fb02714156dd3e0d089d0306808980db90cea8c6e Mar 13 21:00:01 crc 
kubenswrapper[5029]: I0313 21:00:01.121770 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk"] Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.206695 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbtr2\" (UniqueName: \"kubernetes.io/projected/cafa7079-daee-42e6-818b-32277058379d-kube-api-access-jbtr2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.206813 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.207249 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.309879 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbtr2\" (UniqueName: \"kubernetes.io/projected/cafa7079-daee-42e6-818b-32277058379d-kube-api-access-jbtr2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.310039 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.310128 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.316819 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.316971 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.333040 5029 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-jbtr2\" (UniqueName: \"kubernetes.io/projected/cafa7079-daee-42e6-818b-32277058379d-kube-api-access-jbtr2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.403738 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:01 crc kubenswrapper[5029]: I0313 21:00:01.983518 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd"] Mar 13 21:00:01 crc kubenswrapper[5029]: W0313 21:00:01.983632 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcafa7079_daee_42e6_818b_32277058379d.slice/crio-7e84d35aad67758bce9ef516d7e75e50730a8ee138aadfd863ba034cd0757362 WatchSource:0}: Error finding container 7e84d35aad67758bce9ef516d7e75e50730a8ee138aadfd863ba034cd0757362: Status 404 returned error can't find the container with id 7e84d35aad67758bce9ef516d7e75e50730a8ee138aadfd863ba034cd0757362 Mar 13 21:00:02 crc kubenswrapper[5029]: I0313 21:00:02.002106 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" event={"ID":"cafa7079-daee-42e6-818b-32277058379d","Type":"ContainerStarted","Data":"7e84d35aad67758bce9ef516d7e75e50730a8ee138aadfd863ba034cd0757362"} Mar 13 21:00:02 crc kubenswrapper[5029]: I0313 21:00:02.003633 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557260-sbf99" event={"ID":"cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a","Type":"ContainerStarted","Data":"36af3b458da1d4139108870aae99b912acfb62659e446ec898c0d78092881cd9"} Mar 13 21:00:02 crc kubenswrapper[5029]: I0313 
21:00:02.005729 5029 generic.go:334] "Generic (PLEG): container finished" podID="51932f39-1baa-4d43-98ee-a58dccb6251b" containerID="711e01804067f2047c1e8065aa0435b751ea6191ff7b9ebd089313de9b032b2e" exitCode=0 Mar 13 21:00:02 crc kubenswrapper[5029]: I0313 21:00:02.005778 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" event={"ID":"51932f39-1baa-4d43-98ee-a58dccb6251b","Type":"ContainerDied","Data":"711e01804067f2047c1e8065aa0435b751ea6191ff7b9ebd089313de9b032b2e"} Mar 13 21:00:02 crc kubenswrapper[5029]: I0313 21:00:02.005812 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" event={"ID":"51932f39-1baa-4d43-98ee-a58dccb6251b","Type":"ContainerStarted","Data":"c0242d5c871bee9aaf2b1a9fb02714156dd3e0d089d0306808980db90cea8c6e"} Mar 13 21:00:02 crc kubenswrapper[5029]: I0313 21:00:02.600021 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:00:02 crc kubenswrapper[5029]: E0313 21:00:02.601001 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.075179 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" event={"ID":"cafa7079-daee-42e6-818b-32277058379d","Type":"ContainerStarted","Data":"05e150522e57f8450a1c1c1891af549377a243264e95f3258f60dd5c98ccff66"} Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.116945 5029 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" podStartSLOduration=1.527368718 podStartE2EDuration="2.116918017s" podCreationTimestamp="2026-03-13 21:00:01 +0000 UTC" firstStartedPulling="2026-03-13 21:00:01.987810286 +0000 UTC m=+1962.003892689" lastFinishedPulling="2026-03-13 21:00:02.577359585 +0000 UTC m=+1962.593441988" observedRunningTime="2026-03-13 21:00:03.116212788 +0000 UTC m=+1963.132295191" watchObservedRunningTime="2026-03-13 21:00:03.116918017 +0000 UTC m=+1963.133000420" Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.422113 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.583760 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drkmt\" (UniqueName: \"kubernetes.io/projected/51932f39-1baa-4d43-98ee-a58dccb6251b-kube-api-access-drkmt\") pod \"51932f39-1baa-4d43-98ee-a58dccb6251b\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.583974 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51932f39-1baa-4d43-98ee-a58dccb6251b-secret-volume\") pod \"51932f39-1baa-4d43-98ee-a58dccb6251b\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.584044 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51932f39-1baa-4d43-98ee-a58dccb6251b-config-volume\") pod \"51932f39-1baa-4d43-98ee-a58dccb6251b\" (UID: \"51932f39-1baa-4d43-98ee-a58dccb6251b\") " Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.585773 5029 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/51932f39-1baa-4d43-98ee-a58dccb6251b-config-volume" (OuterVolumeSpecName: "config-volume") pod "51932f39-1baa-4d43-98ee-a58dccb6251b" (UID: "51932f39-1baa-4d43-98ee-a58dccb6251b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.591749 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51932f39-1baa-4d43-98ee-a58dccb6251b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51932f39-1baa-4d43-98ee-a58dccb6251b" (UID: "51932f39-1baa-4d43-98ee-a58dccb6251b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.592421 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51932f39-1baa-4d43-98ee-a58dccb6251b-kube-api-access-drkmt" (OuterVolumeSpecName: "kube-api-access-drkmt") pod "51932f39-1baa-4d43-98ee-a58dccb6251b" (UID: "51932f39-1baa-4d43-98ee-a58dccb6251b"). InnerVolumeSpecName "kube-api-access-drkmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.687715 5029 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51932f39-1baa-4d43-98ee-a58dccb6251b-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.687815 5029 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51932f39-1baa-4d43-98ee-a58dccb6251b-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:03 crc kubenswrapper[5029]: I0313 21:00:03.687829 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drkmt\" (UniqueName: \"kubernetes.io/projected/51932f39-1baa-4d43-98ee-a58dccb6251b-kube-api-access-drkmt\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:04 crc kubenswrapper[5029]: I0313 21:00:04.088511 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" event={"ID":"51932f39-1baa-4d43-98ee-a58dccb6251b","Type":"ContainerDied","Data":"c0242d5c871bee9aaf2b1a9fb02714156dd3e0d089d0306808980db90cea8c6e"} Mar 13 21:00:04 crc kubenswrapper[5029]: I0313 21:00:04.088963 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0242d5c871bee9aaf2b1a9fb02714156dd3e0d089d0306808980db90cea8c6e" Mar 13 21:00:04 crc kubenswrapper[5029]: I0313 21:00:04.088606 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk" Mar 13 21:00:06 crc kubenswrapper[5029]: I0313 21:00:06.109419 5029 generic.go:334] "Generic (PLEG): container finished" podID="cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a" containerID="c921ed331d998f4e3e9e1a9dc3c8bb1339db688aa9a67c78b1f9ba13c0f90101" exitCode=0 Mar 13 21:00:06 crc kubenswrapper[5029]: I0313 21:00:06.109527 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557260-sbf99" event={"ID":"cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a","Type":"ContainerDied","Data":"c921ed331d998f4e3e9e1a9dc3c8bb1339db688aa9a67c78b1f9ba13c0f90101"} Mar 13 21:00:07 crc kubenswrapper[5029]: I0313 21:00:07.500433 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557260-sbf99" Mar 13 21:00:07 crc kubenswrapper[5029]: I0313 21:00:07.682672 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxggj\" (UniqueName: \"kubernetes.io/projected/cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a-kube-api-access-bxggj\") pod \"cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a\" (UID: \"cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a\") " Mar 13 21:00:07 crc kubenswrapper[5029]: I0313 21:00:07.689041 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a-kube-api-access-bxggj" (OuterVolumeSpecName: "kube-api-access-bxggj") pod "cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a" (UID: "cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a"). InnerVolumeSpecName "kube-api-access-bxggj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:00:07 crc kubenswrapper[5029]: I0313 21:00:07.785847 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxggj\" (UniqueName: \"kubernetes.io/projected/cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a-kube-api-access-bxggj\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:08 crc kubenswrapper[5029]: I0313 21:00:08.132218 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557260-sbf99" event={"ID":"cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a","Type":"ContainerDied","Data":"36af3b458da1d4139108870aae99b912acfb62659e446ec898c0d78092881cd9"} Mar 13 21:00:08 crc kubenswrapper[5029]: I0313 21:00:08.132832 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36af3b458da1d4139108870aae99b912acfb62659e446ec898c0d78092881cd9" Mar 13 21:00:08 crc kubenswrapper[5029]: I0313 21:00:08.132278 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557260-sbf99" Mar 13 21:00:08 crc kubenswrapper[5029]: I0313 21:00:08.134712 5029 generic.go:334] "Generic (PLEG): container finished" podID="cafa7079-daee-42e6-818b-32277058379d" containerID="05e150522e57f8450a1c1c1891af549377a243264e95f3258f60dd5c98ccff66" exitCode=0 Mar 13 21:00:08 crc kubenswrapper[5029]: I0313 21:00:08.134754 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" event={"ID":"cafa7079-daee-42e6-818b-32277058379d","Type":"ContainerDied","Data":"05e150522e57f8450a1c1c1891af549377a243264e95f3258f60dd5c98ccff66"} Mar 13 21:00:08 crc kubenswrapper[5029]: I0313 21:00:08.584011 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557254-psclf"] Mar 13 21:00:08 crc kubenswrapper[5029]: I0313 21:00:08.593601 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-infra/auto-csr-approver-29557254-psclf"] Mar 13 21:00:08 crc kubenswrapper[5029]: I0313 21:00:08.614944 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a30d71f-d681-4dce-b39f-4e0304fc1a95" path="/var/lib/kubelet/pods/3a30d71f-d681-4dce-b39f-4e0304fc1a95/volumes" Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.540636 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.625106 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbtr2\" (UniqueName: \"kubernetes.io/projected/cafa7079-daee-42e6-818b-32277058379d-kube-api-access-jbtr2\") pod \"cafa7079-daee-42e6-818b-32277058379d\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.625269 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-ssh-key-openstack-edpm-ipam\") pod \"cafa7079-daee-42e6-818b-32277058379d\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.625555 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-inventory\") pod \"cafa7079-daee-42e6-818b-32277058379d\" (UID: \"cafa7079-daee-42e6-818b-32277058379d\") " Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.642336 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafa7079-daee-42e6-818b-32277058379d-kube-api-access-jbtr2" (OuterVolumeSpecName: "kube-api-access-jbtr2") pod "cafa7079-daee-42e6-818b-32277058379d" (UID: "cafa7079-daee-42e6-818b-32277058379d"). 
InnerVolumeSpecName "kube-api-access-jbtr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.655000 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-inventory" (OuterVolumeSpecName: "inventory") pod "cafa7079-daee-42e6-818b-32277058379d" (UID: "cafa7079-daee-42e6-818b-32277058379d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.658299 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cafa7079-daee-42e6-818b-32277058379d" (UID: "cafa7079-daee-42e6-818b-32277058379d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.728368 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.728415 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbtr2\" (UniqueName: \"kubernetes.io/projected/cafa7079-daee-42e6-818b-32277058379d-kube-api-access-jbtr2\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:09 crc kubenswrapper[5029]: I0313 21:00:09.728426 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafa7079-daee-42e6-818b-32277058379d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.156516 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" event={"ID":"cafa7079-daee-42e6-818b-32277058379d","Type":"ContainerDied","Data":"7e84d35aad67758bce9ef516d7e75e50730a8ee138aadfd863ba034cd0757362"} Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.157143 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e84d35aad67758bce9ef516d7e75e50730a8ee138aadfd863ba034cd0757362" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.156575 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.234537 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5"] Mar 13 21:00:10 crc kubenswrapper[5029]: E0313 21:00:10.235063 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51932f39-1baa-4d43-98ee-a58dccb6251b" containerName="collect-profiles" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.235111 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="51932f39-1baa-4d43-98ee-a58dccb6251b" containerName="collect-profiles" Mar 13 21:00:10 crc kubenswrapper[5029]: E0313 21:00:10.235148 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafa7079-daee-42e6-818b-32277058379d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.235160 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafa7079-daee-42e6-818b-32277058379d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 13 21:00:10 crc kubenswrapper[5029]: E0313 21:00:10.235177 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a" containerName="oc" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.235185 5029 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a" containerName="oc" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.235445 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="51932f39-1baa-4d43-98ee-a58dccb6251b" containerName="collect-profiles" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.235484 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafa7079-daee-42e6-818b-32277058379d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.235504 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a" containerName="oc" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.236490 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.239231 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.239615 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.239790 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.241431 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.276206 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5"] Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.343989 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bvdg5\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.344139 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxbkw\" (UniqueName: \"kubernetes.io/projected/6156a413-1c34-4e41-888b-7e0f9cd0dd61-kube-api-access-zxbkw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bvdg5\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.344271 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bvdg5\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.447348 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bvdg5\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.447481 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxbkw\" (UniqueName: \"kubernetes.io/projected/6156a413-1c34-4e41-888b-7e0f9cd0dd61-kube-api-access-zxbkw\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-bvdg5\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.447624 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bvdg5\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.452002 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bvdg5\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.452248 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bvdg5\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.473280 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxbkw\" (UniqueName: \"kubernetes.io/projected/6156a413-1c34-4e41-888b-7e0f9cd0dd61-kube-api-access-zxbkw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bvdg5\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.565852 5029 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.947401 5029 scope.go:117] "RemoveContainer" containerID="24df77b155f847d4bcfc3c6cde67d1e0e0eeea2cfb3bba52cef98d37b9c28f3a" Mar 13 21:00:10 crc kubenswrapper[5029]: I0313 21:00:10.980395 5029 scope.go:117] "RemoveContainer" containerID="eb363144f5c395245f0b4d6a0c350450122f7f56b89bdb7dd188585b3c859838" Mar 13 21:00:11 crc kubenswrapper[5029]: I0313 21:00:11.041427 5029 scope.go:117] "RemoveContainer" containerID="2518d599c9eafbf0c88c9e873be2a900214d81a987e1b52b189ec9503eb66d7b" Mar 13 21:00:11 crc kubenswrapper[5029]: I0313 21:00:11.101419 5029 scope.go:117] "RemoveContainer" containerID="7861a777c5174a282b3807ef824a4d516283ea7280d71642c0d0819694271996" Mar 13 21:00:11 crc kubenswrapper[5029]: I0313 21:00:11.140692 5029 scope.go:117] "RemoveContainer" containerID="bb6621cafaff21691e905a7a332368bcd78b04c74bea53ae656f96b343a4c154" Mar 13 21:00:11 crc kubenswrapper[5029]: I0313 21:00:11.154705 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5"] Mar 13 21:00:12 crc kubenswrapper[5029]: I0313 21:00:12.193515 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" event={"ID":"6156a413-1c34-4e41-888b-7e0f9cd0dd61","Type":"ContainerStarted","Data":"96467ffd3944d9a2bd619d5ae88e0bbb37af16029a9870478f632137157e0a99"} Mar 13 21:00:12 crc kubenswrapper[5029]: I0313 21:00:12.194100 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" event={"ID":"6156a413-1c34-4e41-888b-7e0f9cd0dd61","Type":"ContainerStarted","Data":"a4a444f2168d65ff69c0beb2da724abe7fc00d205843fa586e86bf19afffc5b9"} Mar 13 21:00:12 crc kubenswrapper[5029]: I0313 21:00:12.216109 5029 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" podStartSLOduration=1.780953769 podStartE2EDuration="2.21608774s" podCreationTimestamp="2026-03-13 21:00:10 +0000 UTC" firstStartedPulling="2026-03-13 21:00:11.183509085 +0000 UTC m=+1971.199591488" lastFinishedPulling="2026-03-13 21:00:11.618643056 +0000 UTC m=+1971.634725459" observedRunningTime="2026-03-13 21:00:12.214536948 +0000 UTC m=+1972.230619371" watchObservedRunningTime="2026-03-13 21:00:12.21608774 +0000 UTC m=+1972.232170143" Mar 13 21:00:14 crc kubenswrapper[5029]: I0313 21:00:14.600173 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:00:14 crc kubenswrapper[5029]: E0313 21:00:14.600983 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:00:28 crc kubenswrapper[5029]: I0313 21:00:28.599892 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:00:28 crc kubenswrapper[5029]: E0313 21:00:28.600737 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:00:32 crc kubenswrapper[5029]: I0313 21:00:32.056919 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-fffm7"] Mar 13 21:00:32 crc kubenswrapper[5029]: I0313 21:00:32.070646 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fffm7"] Mar 13 21:00:32 crc kubenswrapper[5029]: I0313 21:00:32.614508 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98b2e2a-db84-4220-ad1a-5e0e8a867b68" path="/var/lib/kubelet/pods/f98b2e2a-db84-4220-ad1a-5e0e8a867b68/volumes" Mar 13 21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.034872 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jpvh9"] Mar 13 21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.045382 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-da29-account-create-update-4x25g"] Mar 13 21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.054572 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jpvh9"] Mar 13 21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.064001 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-g9qh5"] Mar 13 21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.073401 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1524-account-create-update-nlmgn"] Mar 13 21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.082371 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8aef-account-create-update-kkxb4"] Mar 13 21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.091006 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1524-account-create-update-nlmgn"] Mar 13 21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.100619 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-da29-account-create-update-4x25g"] Mar 13 21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.109586 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-g9qh5"] Mar 13 
21:00:33 crc kubenswrapper[5029]: I0313 21:00:33.119190 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8aef-account-create-update-kkxb4"] Mar 13 21:00:34 crc kubenswrapper[5029]: I0313 21:00:34.611970 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12bb1159-3ba4-45dd-8bc3-26382b17baf5" path="/var/lib/kubelet/pods/12bb1159-3ba4-45dd-8bc3-26382b17baf5/volumes" Mar 13 21:00:34 crc kubenswrapper[5029]: I0313 21:00:34.613216 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2043c096-6123-44c7-90f8-b91a70523471" path="/var/lib/kubelet/pods/2043c096-6123-44c7-90f8-b91a70523471/volumes" Mar 13 21:00:34 crc kubenswrapper[5029]: I0313 21:00:34.613893 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af79719-de27-49b3-aa21-401419db6fc3" path="/var/lib/kubelet/pods/5af79719-de27-49b3-aa21-401419db6fc3/volumes" Mar 13 21:00:34 crc kubenswrapper[5029]: I0313 21:00:34.614532 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e85f8e-05b4-4780-87f4-df861db34de7" path="/var/lib/kubelet/pods/90e85f8e-05b4-4780-87f4-df861db34de7/volumes" Mar 13 21:00:34 crc kubenswrapper[5029]: I0313 21:00:34.616256 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c90c73b6-45b6-4a3e-bd24-4bb1873b73cd" path="/var/lib/kubelet/pods/c90c73b6-45b6-4a3e-bd24-4bb1873b73cd/volumes" Mar 13 21:00:43 crc kubenswrapper[5029]: I0313 21:00:43.600693 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:00:43 crc kubenswrapper[5029]: E0313 21:00:43.602088 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.015557 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nzgn4"] Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.022595 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.029508 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nzgn4"] Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.209886 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-utilities\") pod \"certified-operators-nzgn4\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.210085 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xd8\" (UniqueName: \"kubernetes.io/projected/91025144-5544-42a0-8c8d-d80e17574d91-kube-api-access-29xd8\") pod \"certified-operators-nzgn4\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.210120 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-catalog-content\") pod \"certified-operators-nzgn4\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.312224 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-utilities\") pod \"certified-operators-nzgn4\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.312389 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xd8\" (UniqueName: \"kubernetes.io/projected/91025144-5544-42a0-8c8d-d80e17574d91-kube-api-access-29xd8\") pod \"certified-operators-nzgn4\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.312417 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-catalog-content\") pod \"certified-operators-nzgn4\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.312941 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-utilities\") pod \"certified-operators-nzgn4\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.312970 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-catalog-content\") pod \"certified-operators-nzgn4\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.342949 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-29xd8\" (UniqueName: \"kubernetes.io/projected/91025144-5544-42a0-8c8d-d80e17574d91-kube-api-access-29xd8\") pod \"certified-operators-nzgn4\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.348462 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:46 crc kubenswrapper[5029]: I0313 21:00:46.892040 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nzgn4"] Mar 13 21:00:47 crc kubenswrapper[5029]: I0313 21:00:47.560447 5029 generic.go:334] "Generic (PLEG): container finished" podID="91025144-5544-42a0-8c8d-d80e17574d91" containerID="0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e" exitCode=0 Mar 13 21:00:47 crc kubenswrapper[5029]: I0313 21:00:47.560805 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzgn4" event={"ID":"91025144-5544-42a0-8c8d-d80e17574d91","Type":"ContainerDied","Data":"0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e"} Mar 13 21:00:47 crc kubenswrapper[5029]: I0313 21:00:47.560876 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzgn4" event={"ID":"91025144-5544-42a0-8c8d-d80e17574d91","Type":"ContainerStarted","Data":"6d0a18a33addce5846f69beeee70ae41bbfc27f60572d03f7dce752050efbb95"} Mar 13 21:00:48 crc kubenswrapper[5029]: I0313 21:00:48.572558 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzgn4" event={"ID":"91025144-5544-42a0-8c8d-d80e17574d91","Type":"ContainerStarted","Data":"140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b"} Mar 13 21:00:49 crc kubenswrapper[5029]: I0313 21:00:49.584050 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="91025144-5544-42a0-8c8d-d80e17574d91" containerID="140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b" exitCode=0 Mar 13 21:00:49 crc kubenswrapper[5029]: I0313 21:00:49.584145 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzgn4" event={"ID":"91025144-5544-42a0-8c8d-d80e17574d91","Type":"ContainerDied","Data":"140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b"} Mar 13 21:00:50 crc kubenswrapper[5029]: I0313 21:00:50.604358 5029 generic.go:334] "Generic (PLEG): container finished" podID="6156a413-1c34-4e41-888b-7e0f9cd0dd61" containerID="96467ffd3944d9a2bd619d5ae88e0bbb37af16029a9870478f632137157e0a99" exitCode=0 Mar 13 21:00:50 crc kubenswrapper[5029]: I0313 21:00:50.614993 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzgn4" event={"ID":"91025144-5544-42a0-8c8d-d80e17574d91","Type":"ContainerStarted","Data":"b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632"} Mar 13 21:00:50 crc kubenswrapper[5029]: I0313 21:00:50.615061 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" event={"ID":"6156a413-1c34-4e41-888b-7e0f9cd0dd61","Type":"ContainerDied","Data":"96467ffd3944d9a2bd619d5ae88e0bbb37af16029a9870478f632137157e0a99"} Mar 13 21:00:50 crc kubenswrapper[5029]: I0313 21:00:50.647204 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nzgn4" podStartSLOduration=3.171392449 podStartE2EDuration="5.647164523s" podCreationTimestamp="2026-03-13 21:00:45 +0000 UTC" firstStartedPulling="2026-03-13 21:00:47.562740699 +0000 UTC m=+2007.578823112" lastFinishedPulling="2026-03-13 21:00:50.038512793 +0000 UTC m=+2010.054595186" observedRunningTime="2026-03-13 21:00:50.63682621 +0000 UTC m=+2010.652908623" watchObservedRunningTime="2026-03-13 21:00:50.647164523 +0000 UTC 
m=+2010.663246926" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.096388 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.250168 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxbkw\" (UniqueName: \"kubernetes.io/projected/6156a413-1c34-4e41-888b-7e0f9cd0dd61-kube-api-access-zxbkw\") pod \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.250282 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-ssh-key-openstack-edpm-ipam\") pod \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.250377 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-inventory\") pod \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\" (UID: \"6156a413-1c34-4e41-888b-7e0f9cd0dd61\") " Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.257304 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6156a413-1c34-4e41-888b-7e0f9cd0dd61-kube-api-access-zxbkw" (OuterVolumeSpecName: "kube-api-access-zxbkw") pod "6156a413-1c34-4e41-888b-7e0f9cd0dd61" (UID: "6156a413-1c34-4e41-888b-7e0f9cd0dd61"). InnerVolumeSpecName "kube-api-access-zxbkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.285487 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-inventory" (OuterVolumeSpecName: "inventory") pod "6156a413-1c34-4e41-888b-7e0f9cd0dd61" (UID: "6156a413-1c34-4e41-888b-7e0f9cd0dd61"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.299290 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6156a413-1c34-4e41-888b-7e0f9cd0dd61" (UID: "6156a413-1c34-4e41-888b-7e0f9cd0dd61"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.353559 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxbkw\" (UniqueName: \"kubernetes.io/projected/6156a413-1c34-4e41-888b-7e0f9cd0dd61-kube-api-access-zxbkw\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.353614 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.353627 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6156a413-1c34-4e41-888b-7e0f9cd0dd61-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.628481 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" 
event={"ID":"6156a413-1c34-4e41-888b-7e0f9cd0dd61","Type":"ContainerDied","Data":"a4a444f2168d65ff69c0beb2da724abe7fc00d205843fa586e86bf19afffc5b9"} Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.628540 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4a444f2168d65ff69c0beb2da724abe7fc00d205843fa586e86bf19afffc5b9" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.628583 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bvdg5" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.782586 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d"] Mar 13 21:00:52 crc kubenswrapper[5029]: E0313 21:00:52.783213 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6156a413-1c34-4e41-888b-7e0f9cd0dd61" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.783240 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="6156a413-1c34-4e41-888b-7e0f9cd0dd61" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.783496 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="6156a413-1c34-4e41-888b-7e0f9cd0dd61" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.784405 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.789931 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.790174 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.792019 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.794142 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.817243 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d"] Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.865567 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.865686 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.865718 
5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpsjb\" (UniqueName: \"kubernetes.io/projected/65396eef-a783-4de6-9a3f-78632ce797c3-kube-api-access-kpsjb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.967679 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.967838 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.967911 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpsjb\" (UniqueName: \"kubernetes.io/projected/65396eef-a783-4de6-9a3f-78632ce797c3-kube-api-access-kpsjb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.972312 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.975084 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:52 crc kubenswrapper[5029]: I0313 21:00:52.985165 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpsjb\" (UniqueName: \"kubernetes.io/projected/65396eef-a783-4de6-9a3f-78632ce797c3-kube-api-access-kpsjb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:53 crc kubenswrapper[5029]: I0313 21:00:53.107316 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:00:53 crc kubenswrapper[5029]: I0313 21:00:53.666970 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d"] Mar 13 21:00:54 crc kubenswrapper[5029]: I0313 21:00:54.601113 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:00:54 crc kubenswrapper[5029]: E0313 21:00:54.605665 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:00:54 crc kubenswrapper[5029]: I0313 21:00:54.649470 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" event={"ID":"65396eef-a783-4de6-9a3f-78632ce797c3","Type":"ContainerStarted","Data":"8488080581fc00ab5176bc02019f518321bdb2957b61e00b83d0158ad56afa69"} Mar 13 21:00:54 crc kubenswrapper[5029]: I0313 21:00:54.649532 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" event={"ID":"65396eef-a783-4de6-9a3f-78632ce797c3","Type":"ContainerStarted","Data":"055d8b8042d018fae0216b6d5d0b298d69d03129e058c7500f32769b3c305365"} Mar 13 21:00:54 crc kubenswrapper[5029]: I0313 21:00:54.676425 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" podStartSLOduration=2.006955565 podStartE2EDuration="2.676401855s" podCreationTimestamp="2026-03-13 21:00:52 +0000 UTC" firstStartedPulling="2026-03-13 
21:00:53.667363822 +0000 UTC m=+2013.683446225" lastFinishedPulling="2026-03-13 21:00:54.336810112 +0000 UTC m=+2014.352892515" observedRunningTime="2026-03-13 21:00:54.666526436 +0000 UTC m=+2014.682608859" watchObservedRunningTime="2026-03-13 21:00:54.676401855 +0000 UTC m=+2014.692484258" Mar 13 21:00:56 crc kubenswrapper[5029]: I0313 21:00:56.349103 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:56 crc kubenswrapper[5029]: I0313 21:00:56.349570 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:56 crc kubenswrapper[5029]: I0313 21:00:56.400710 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:56 crc kubenswrapper[5029]: I0313 21:00:56.711044 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:56 crc kubenswrapper[5029]: I0313 21:00:56.771233 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nzgn4"] Mar 13 21:00:58 crc kubenswrapper[5029]: I0313 21:00:58.689296 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nzgn4" podUID="91025144-5544-42a0-8c8d-d80e17574d91" containerName="registry-server" containerID="cri-o://b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632" gracePeriod=2 Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.190592 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.325099 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-utilities\") pod \"91025144-5544-42a0-8c8d-d80e17574d91\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.325167 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29xd8\" (UniqueName: \"kubernetes.io/projected/91025144-5544-42a0-8c8d-d80e17574d91-kube-api-access-29xd8\") pod \"91025144-5544-42a0-8c8d-d80e17574d91\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.325205 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-catalog-content\") pod \"91025144-5544-42a0-8c8d-d80e17574d91\" (UID: \"91025144-5544-42a0-8c8d-d80e17574d91\") " Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.326700 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-utilities" (OuterVolumeSpecName: "utilities") pod "91025144-5544-42a0-8c8d-d80e17574d91" (UID: "91025144-5544-42a0-8c8d-d80e17574d91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.341492 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91025144-5544-42a0-8c8d-d80e17574d91-kube-api-access-29xd8" (OuterVolumeSpecName: "kube-api-access-29xd8") pod "91025144-5544-42a0-8c8d-d80e17574d91" (UID: "91025144-5544-42a0-8c8d-d80e17574d91"). InnerVolumeSpecName "kube-api-access-29xd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.398606 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91025144-5544-42a0-8c8d-d80e17574d91" (UID: "91025144-5544-42a0-8c8d-d80e17574d91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.428328 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.428593 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29xd8\" (UniqueName: \"kubernetes.io/projected/91025144-5544-42a0-8c8d-d80e17574d91-kube-api-access-29xd8\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.428667 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91025144-5544-42a0-8c8d-d80e17574d91-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.704252 5029 generic.go:334] "Generic (PLEG): container finished" podID="91025144-5544-42a0-8c8d-d80e17574d91" containerID="b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632" exitCode=0 Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.704308 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzgn4" event={"ID":"91025144-5544-42a0-8c8d-d80e17574d91","Type":"ContainerDied","Data":"b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632"} Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.704344 5029 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-nzgn4" event={"ID":"91025144-5544-42a0-8c8d-d80e17574d91","Type":"ContainerDied","Data":"6d0a18a33addce5846f69beeee70ae41bbfc27f60572d03f7dce752050efbb95"} Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.704359 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nzgn4" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.704364 5029 scope.go:117] "RemoveContainer" containerID="b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.728945 5029 scope.go:117] "RemoveContainer" containerID="140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.752878 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nzgn4"] Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.766229 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nzgn4"] Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.771822 5029 scope.go:117] "RemoveContainer" containerID="0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.812467 5029 scope.go:117] "RemoveContainer" containerID="b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632" Mar 13 21:00:59 crc kubenswrapper[5029]: E0313 21:00:59.813202 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632\": container with ID starting with b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632 not found: ID does not exist" containerID="b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 
21:00:59.813241 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632"} err="failed to get container status \"b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632\": rpc error: code = NotFound desc = could not find container \"b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632\": container with ID starting with b22db3af1d744d92651970067fe4a980b89b15a92a75c4f6321846ddfbfd6632 not found: ID does not exist" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.813264 5029 scope.go:117] "RemoveContainer" containerID="140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b" Mar 13 21:00:59 crc kubenswrapper[5029]: E0313 21:00:59.813676 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b\": container with ID starting with 140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b not found: ID does not exist" containerID="140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.813704 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b"} err="failed to get container status \"140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b\": rpc error: code = NotFound desc = could not find container \"140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b\": container with ID starting with 140c37d37d3020066574d1ae0b06d4abdf77790a0ea7cf4c4bc6370b01c00b3b not found: ID does not exist" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.813717 5029 scope.go:117] "RemoveContainer" containerID="0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e" Mar 13 21:00:59 crc 
kubenswrapper[5029]: E0313 21:00:59.814158 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e\": container with ID starting with 0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e not found: ID does not exist" containerID="0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e" Mar 13 21:00:59 crc kubenswrapper[5029]: I0313 21:00:59.814186 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e"} err="failed to get container status \"0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e\": rpc error: code = NotFound desc = could not find container \"0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e\": container with ID starting with 0677b60c592f7d13c0b0f52b8e75f9a34024ee4e41343554f003a5a4cfc33f2e not found: ID does not exist" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.141227 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29557261-h5hn8"] Mar 13 21:01:00 crc kubenswrapper[5029]: E0313 21:01:00.141806 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91025144-5544-42a0-8c8d-d80e17574d91" containerName="extract-content" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.141833 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="91025144-5544-42a0-8c8d-d80e17574d91" containerName="extract-content" Mar 13 21:01:00 crc kubenswrapper[5029]: E0313 21:01:00.141893 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91025144-5544-42a0-8c8d-d80e17574d91" containerName="extract-utilities" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.141901 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="91025144-5544-42a0-8c8d-d80e17574d91" containerName="extract-utilities" 
Mar 13 21:01:00 crc kubenswrapper[5029]: E0313 21:01:00.141919 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91025144-5544-42a0-8c8d-d80e17574d91" containerName="registry-server" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.141926 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="91025144-5544-42a0-8c8d-d80e17574d91" containerName="registry-server" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.142140 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="91025144-5544-42a0-8c8d-d80e17574d91" containerName="registry-server" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.144334 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.154298 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29557261-h5hn8"] Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.255584 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-combined-ca-bundle\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.255838 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-fernet-keys\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.256448 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwf28\" (UniqueName: 
\"kubernetes.io/projected/1d45d6d4-22ee-43ee-af88-5259795bbf30-kube-api-access-vwf28\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.256707 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-config-data\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.359415 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwf28\" (UniqueName: \"kubernetes.io/projected/1d45d6d4-22ee-43ee-af88-5259795bbf30-kube-api-access-vwf28\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.359577 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-config-data\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.359743 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-combined-ca-bundle\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.359815 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-fernet-keys\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.365221 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-config-data\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.366606 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-fernet-keys\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.367431 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-combined-ca-bundle\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.383882 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwf28\" (UniqueName: \"kubernetes.io/projected/1d45d6d4-22ee-43ee-af88-5259795bbf30-kube-api-access-vwf28\") pod \"keystone-cron-29557261-h5hn8\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.462449 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.622214 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91025144-5544-42a0-8c8d-d80e17574d91" path="/var/lib/kubelet/pods/91025144-5544-42a0-8c8d-d80e17574d91/volumes" Mar 13 21:01:00 crc kubenswrapper[5029]: I0313 21:01:00.948869 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29557261-h5hn8"] Mar 13 21:01:01 crc kubenswrapper[5029]: I0313 21:01:01.742545 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557261-h5hn8" event={"ID":"1d45d6d4-22ee-43ee-af88-5259795bbf30","Type":"ContainerStarted","Data":"6462ec51dfa42d6ffa69ef9d122ea170840318b1b459f730460c5df9c03977ff"} Mar 13 21:01:01 crc kubenswrapper[5029]: I0313 21:01:01.742919 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557261-h5hn8" event={"ID":"1d45d6d4-22ee-43ee-af88-5259795bbf30","Type":"ContainerStarted","Data":"041f16fd32496b61e6f2ef28899feb5ff3c592428568db6ed361e79f6b00b46a"} Mar 13 21:01:01 crc kubenswrapper[5029]: I0313 21:01:01.765549 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29557261-h5hn8" podStartSLOduration=1.765521591 podStartE2EDuration="1.765521591s" podCreationTimestamp="2026-03-13 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 21:01:01.760840213 +0000 UTC m=+2021.776922616" watchObservedRunningTime="2026-03-13 21:01:01.765521591 +0000 UTC m=+2021.781603994" Mar 13 21:01:03 crc kubenswrapper[5029]: I0313 21:01:03.056539 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mwzf9"] Mar 13 21:01:03 crc kubenswrapper[5029]: I0313 21:01:03.073758 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-mwzf9"] Mar 13 21:01:03 crc kubenswrapper[5029]: I0313 21:01:03.766061 5029 generic.go:334] "Generic (PLEG): container finished" podID="1d45d6d4-22ee-43ee-af88-5259795bbf30" containerID="6462ec51dfa42d6ffa69ef9d122ea170840318b1b459f730460c5df9c03977ff" exitCode=0 Mar 13 21:01:03 crc kubenswrapper[5029]: I0313 21:01:03.766553 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557261-h5hn8" event={"ID":"1d45d6d4-22ee-43ee-af88-5259795bbf30","Type":"ContainerDied","Data":"6462ec51dfa42d6ffa69ef9d122ea170840318b1b459f730460c5df9c03977ff"} Mar 13 21:01:04 crc kubenswrapper[5029]: I0313 21:01:04.612833 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00b4b9eb-002c-49a2-89ef-65fcf9fd4a32" path="/var/lib/kubelet/pods/00b4b9eb-002c-49a2-89ef-65fcf9fd4a32/volumes" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.167625 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.280096 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-combined-ca-bundle\") pod \"1d45d6d4-22ee-43ee-af88-5259795bbf30\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.280204 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-config-data\") pod \"1d45d6d4-22ee-43ee-af88-5259795bbf30\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.280315 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwf28\" (UniqueName: 
\"kubernetes.io/projected/1d45d6d4-22ee-43ee-af88-5259795bbf30-kube-api-access-vwf28\") pod \"1d45d6d4-22ee-43ee-af88-5259795bbf30\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.280411 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-fernet-keys\") pod \"1d45d6d4-22ee-43ee-af88-5259795bbf30\" (UID: \"1d45d6d4-22ee-43ee-af88-5259795bbf30\") " Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.287688 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d45d6d4-22ee-43ee-af88-5259795bbf30-kube-api-access-vwf28" (OuterVolumeSpecName: "kube-api-access-vwf28") pod "1d45d6d4-22ee-43ee-af88-5259795bbf30" (UID: "1d45d6d4-22ee-43ee-af88-5259795bbf30"). InnerVolumeSpecName "kube-api-access-vwf28". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.288106 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1d45d6d4-22ee-43ee-af88-5259795bbf30" (UID: "1d45d6d4-22ee-43ee-af88-5259795bbf30"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.311965 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d45d6d4-22ee-43ee-af88-5259795bbf30" (UID: "1d45d6d4-22ee-43ee-af88-5259795bbf30"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.339480 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-config-data" (OuterVolumeSpecName: "config-data") pod "1d45d6d4-22ee-43ee-af88-5259795bbf30" (UID: "1d45d6d4-22ee-43ee-af88-5259795bbf30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.383440 5029 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.383488 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.383505 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d45d6d4-22ee-43ee-af88-5259795bbf30-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.383518 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwf28\" (UniqueName: \"kubernetes.io/projected/1d45d6d4-22ee-43ee-af88-5259795bbf30-kube-api-access-vwf28\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.788188 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557261-h5hn8" event={"ID":"1d45d6d4-22ee-43ee-af88-5259795bbf30","Type":"ContainerDied","Data":"041f16fd32496b61e6f2ef28899feb5ff3c592428568db6ed361e79f6b00b46a"} Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.788738 5029 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="041f16fd32496b61e6f2ef28899feb5ff3c592428568db6ed361e79f6b00b46a" Mar 13 21:01:05 crc kubenswrapper[5029]: I0313 21:01:05.788519 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29557261-h5hn8" Mar 13 21:01:08 crc kubenswrapper[5029]: I0313 21:01:08.600060 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:01:08 crc kubenswrapper[5029]: E0313 21:01:08.600610 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:01:11 crc kubenswrapper[5029]: I0313 21:01:11.298051 5029 scope.go:117] "RemoveContainer" containerID="1eddce6f03cd99f2710df6b86936af86529b10595cc42b1e643a92fe58302af9" Mar 13 21:01:11 crc kubenswrapper[5029]: I0313 21:01:11.321182 5029 scope.go:117] "RemoveContainer" containerID="aecdc340dff1a05a4b04259d24da344591b88d8816c641d63d8f444c918f2d5d" Mar 13 21:01:11 crc kubenswrapper[5029]: I0313 21:01:11.370139 5029 scope.go:117] "RemoveContainer" containerID="9ffc4c109d12644f0be852f0c2118230b7eca8481e488e657056375dd3c6afc2" Mar 13 21:01:11 crc kubenswrapper[5029]: I0313 21:01:11.419242 5029 scope.go:117] "RemoveContainer" containerID="f4bce576fd371bc94e26b2ec106bf9d433ffe799345f5e05a5f619aaa12aabdd" Mar 13 21:01:11 crc kubenswrapper[5029]: I0313 21:01:11.474046 5029 scope.go:117] "RemoveContainer" containerID="859ad1074b9cde91a7e3200a3bdbcebfda615090bdf40dc598153476471c4232" Mar 13 21:01:11 crc kubenswrapper[5029]: I0313 21:01:11.523409 5029 scope.go:117] "RemoveContainer" 
containerID="ce4533e15a29f2473cb0ee6798c4596c018d71abcc8b7a0ed51b74ce156e4024" Mar 13 21:01:11 crc kubenswrapper[5029]: I0313 21:01:11.599074 5029 scope.go:117] "RemoveContainer" containerID="21e79b1a579aa9f29541504aa14c4d90fc0b033f370b63f35fc7cbd54b9da387" Mar 13 21:01:19 crc kubenswrapper[5029]: I0313 21:01:19.599499 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:01:19 crc kubenswrapper[5029]: E0313 21:01:19.601293 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:01:30 crc kubenswrapper[5029]: I0313 21:01:30.076233 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-nds9k"] Mar 13 21:01:30 crc kubenswrapper[5029]: I0313 21:01:30.087102 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nhlxw"] Mar 13 21:01:30 crc kubenswrapper[5029]: I0313 21:01:30.116299 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nhlxw"] Mar 13 21:01:30 crc kubenswrapper[5029]: I0313 21:01:30.132252 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-nds9k"] Mar 13 21:01:30 crc kubenswrapper[5029]: I0313 21:01:30.608420 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:01:30 crc kubenswrapper[5029]: E0313 21:01:30.608900 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:01:30 crc kubenswrapper[5029]: I0313 21:01:30.622563 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9a1a6da-0bb6-4002-96f3-2b4275db33f0" path="/var/lib/kubelet/pods/b9a1a6da-0bb6-4002-96f3-2b4275db33f0/volumes" Mar 13 21:01:30 crc kubenswrapper[5029]: I0313 21:01:30.624168 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b9ff74-525b-4376-91b3-8ca127d7174a" path="/var/lib/kubelet/pods/c8b9ff74-525b-4376-91b3-8ca127d7174a/volumes" Mar 13 21:01:44 crc kubenswrapper[5029]: I0313 21:01:44.600045 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:01:44 crc kubenswrapper[5029]: E0313 21:01:44.601193 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:01:46 crc kubenswrapper[5029]: I0313 21:01:46.198142 5029 generic.go:334] "Generic (PLEG): container finished" podID="65396eef-a783-4de6-9a3f-78632ce797c3" containerID="8488080581fc00ab5176bc02019f518321bdb2957b61e00b83d0158ad56afa69" exitCode=0 Mar 13 21:01:46 crc kubenswrapper[5029]: I0313 21:01:46.198253 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" 
event={"ID":"65396eef-a783-4de6-9a3f-78632ce797c3","Type":"ContainerDied","Data":"8488080581fc00ab5176bc02019f518321bdb2957b61e00b83d0158ad56afa69"} Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.712241 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.832409 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-inventory\") pod \"65396eef-a783-4de6-9a3f-78632ce797c3\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.832588 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-ssh-key-openstack-edpm-ipam\") pod \"65396eef-a783-4de6-9a3f-78632ce797c3\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.832752 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpsjb\" (UniqueName: \"kubernetes.io/projected/65396eef-a783-4de6-9a3f-78632ce797c3-kube-api-access-kpsjb\") pod \"65396eef-a783-4de6-9a3f-78632ce797c3\" (UID: \"65396eef-a783-4de6-9a3f-78632ce797c3\") " Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.839607 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65396eef-a783-4de6-9a3f-78632ce797c3-kube-api-access-kpsjb" (OuterVolumeSpecName: "kube-api-access-kpsjb") pod "65396eef-a783-4de6-9a3f-78632ce797c3" (UID: "65396eef-a783-4de6-9a3f-78632ce797c3"). InnerVolumeSpecName "kube-api-access-kpsjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.864076 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-inventory" (OuterVolumeSpecName: "inventory") pod "65396eef-a783-4de6-9a3f-78632ce797c3" (UID: "65396eef-a783-4de6-9a3f-78632ce797c3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.864085 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "65396eef-a783-4de6-9a3f-78632ce797c3" (UID: "65396eef-a783-4de6-9a3f-78632ce797c3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.935241 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpsjb\" (UniqueName: \"kubernetes.io/projected/65396eef-a783-4de6-9a3f-78632ce797c3-kube-api-access-kpsjb\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.935507 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:47 crc kubenswrapper[5029]: I0313 21:01:47.935601 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65396eef-a783-4de6-9a3f-78632ce797c3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.223148 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" 
event={"ID":"65396eef-a783-4de6-9a3f-78632ce797c3","Type":"ContainerDied","Data":"055d8b8042d018fae0216b6d5d0b298d69d03129e058c7500f32769b3c305365"} Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.223196 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="055d8b8042d018fae0216b6d5d0b298d69d03129e058c7500f32769b3c305365" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.223203 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.331127 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cwkds"] Mar 13 21:01:48 crc kubenswrapper[5029]: E0313 21:01:48.331678 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d45d6d4-22ee-43ee-af88-5259795bbf30" containerName="keystone-cron" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.331704 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d45d6d4-22ee-43ee-af88-5259795bbf30" containerName="keystone-cron" Mar 13 21:01:48 crc kubenswrapper[5029]: E0313 21:01:48.331753 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65396eef-a783-4de6-9a3f-78632ce797c3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.331762 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="65396eef-a783-4de6-9a3f-78632ce797c3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.332034 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="65396eef-a783-4de6-9a3f-78632ce797c3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.332072 5029 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1d45d6d4-22ee-43ee-af88-5259795bbf30" containerName="keystone-cron" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.332936 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.335955 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.336027 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.336700 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.339789 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.344243 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cwkds"] Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.467915 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cwkds\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.468242 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj5kv\" (UniqueName: \"kubernetes.io/projected/c718816f-d85a-4401-ac91-2365bffde224-kube-api-access-bj5kv\") pod \"ssh-known-hosts-edpm-deployment-cwkds\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.468403 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cwkds\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.570612 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cwkds\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.570701 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj5kv\" (UniqueName: \"kubernetes.io/projected/c718816f-d85a-4401-ac91-2365bffde224-kube-api-access-bj5kv\") pod \"ssh-known-hosts-edpm-deployment-cwkds\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.570730 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cwkds\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.576999 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-inventory-0\") pod 
\"ssh-known-hosts-edpm-deployment-cwkds\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.584001 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cwkds\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.588584 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj5kv\" (UniqueName: \"kubernetes.io/projected/c718816f-d85a-4401-ac91-2365bffde224-kube-api-access-bj5kv\") pod \"ssh-known-hosts-edpm-deployment-cwkds\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:48 crc kubenswrapper[5029]: I0313 21:01:48.654791 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:49 crc kubenswrapper[5029]: I0313 21:01:49.025810 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cwkds"] Mar 13 21:01:49 crc kubenswrapper[5029]: I0313 21:01:49.235331 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" event={"ID":"c718816f-d85a-4401-ac91-2365bffde224","Type":"ContainerStarted","Data":"8d981533796023be7d06a836adc4a97510b0f9cd3deddae77f570d7d4ca00895"} Mar 13 21:01:50 crc kubenswrapper[5029]: I0313 21:01:50.247768 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" event={"ID":"c718816f-d85a-4401-ac91-2365bffde224","Type":"ContainerStarted","Data":"b51cd7ba68cbb82392a6712ce806e3cb571d9073f6b6d1aa08d30e62fc75eb5b"} Mar 13 21:01:50 crc kubenswrapper[5029]: I0313 21:01:50.272765 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" podStartSLOduration=1.8492129579999999 podStartE2EDuration="2.272723693s" podCreationTimestamp="2026-03-13 21:01:48 +0000 UTC" firstStartedPulling="2026-03-13 21:01:49.036144977 +0000 UTC m=+2069.052227380" lastFinishedPulling="2026-03-13 21:01:49.459655712 +0000 UTC m=+2069.475738115" observedRunningTime="2026-03-13 21:01:50.265964769 +0000 UTC m=+2070.282047192" watchObservedRunningTime="2026-03-13 21:01:50.272723693 +0000 UTC m=+2070.288806096" Mar 13 21:01:57 crc kubenswrapper[5029]: I0313 21:01:57.308968 5029 generic.go:334] "Generic (PLEG): container finished" podID="c718816f-d85a-4401-ac91-2365bffde224" containerID="b51cd7ba68cbb82392a6712ce806e3cb571d9073f6b6d1aa08d30e62fc75eb5b" exitCode=0 Mar 13 21:01:57 crc kubenswrapper[5029]: I0313 21:01:57.309076 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" 
event={"ID":"c718816f-d85a-4401-ac91-2365bffde224","Type":"ContainerDied","Data":"b51cd7ba68cbb82392a6712ce806e3cb571d9073f6b6d1aa08d30e62fc75eb5b"} Mar 13 21:01:58 crc kubenswrapper[5029]: I0313 21:01:58.746821 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:58 crc kubenswrapper[5029]: I0313 21:01:58.901150 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-ssh-key-openstack-edpm-ipam\") pod \"c718816f-d85a-4401-ac91-2365bffde224\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " Mar 13 21:01:58 crc kubenswrapper[5029]: I0313 21:01:58.901511 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-inventory-0\") pod \"c718816f-d85a-4401-ac91-2365bffde224\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " Mar 13 21:01:58 crc kubenswrapper[5029]: I0313 21:01:58.901746 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj5kv\" (UniqueName: \"kubernetes.io/projected/c718816f-d85a-4401-ac91-2365bffde224-kube-api-access-bj5kv\") pod \"c718816f-d85a-4401-ac91-2365bffde224\" (UID: \"c718816f-d85a-4401-ac91-2365bffde224\") " Mar 13 21:01:58 crc kubenswrapper[5029]: I0313 21:01:58.906507 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c718816f-d85a-4401-ac91-2365bffde224-kube-api-access-bj5kv" (OuterVolumeSpecName: "kube-api-access-bj5kv") pod "c718816f-d85a-4401-ac91-2365bffde224" (UID: "c718816f-d85a-4401-ac91-2365bffde224"). InnerVolumeSpecName "kube-api-access-bj5kv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:01:58 crc kubenswrapper[5029]: I0313 21:01:58.929250 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c718816f-d85a-4401-ac91-2365bffde224" (UID: "c718816f-d85a-4401-ac91-2365bffde224"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:01:58 crc kubenswrapper[5029]: I0313 21:01:58.935731 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c718816f-d85a-4401-ac91-2365bffde224" (UID: "c718816f-d85a-4401-ac91-2365bffde224"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.003996 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj5kv\" (UniqueName: \"kubernetes.io/projected/c718816f-d85a-4401-ac91-2365bffde224-kube-api-access-bj5kv\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.004094 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.004108 5029 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c718816f-d85a-4401-ac91-2365bffde224-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.330700 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" 
event={"ID":"c718816f-d85a-4401-ac91-2365bffde224","Type":"ContainerDied","Data":"8d981533796023be7d06a836adc4a97510b0f9cd3deddae77f570d7d4ca00895"} Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.330767 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d981533796023be7d06a836adc4a97510b0f9cd3deddae77f570d7d4ca00895" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.330826 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cwkds" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.409710 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm"] Mar 13 21:01:59 crc kubenswrapper[5029]: E0313 21:01:59.410194 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c718816f-d85a-4401-ac91-2365bffde224" containerName="ssh-known-hosts-edpm-deployment" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.410214 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="c718816f-d85a-4401-ac91-2365bffde224" containerName="ssh-known-hosts-edpm-deployment" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.410520 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="c718816f-d85a-4401-ac91-2365bffde224" containerName="ssh-known-hosts-edpm-deployment" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.411459 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.414544 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.414781 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.415555 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.415705 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.428619 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm"] Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.514882 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzmg\" (UniqueName: \"kubernetes.io/projected/b37a021c-5749-4a8c-b0ca-22cc684d3c78-kube-api-access-8kzmg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5qqtm\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.514996 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5qqtm\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.515440 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5qqtm\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.599755 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:01:59 crc kubenswrapper[5029]: E0313 21:01:59.600113 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.617914 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5qqtm\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.618087 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kzmg\" (UniqueName: \"kubernetes.io/projected/b37a021c-5749-4a8c-b0ca-22cc684d3c78-kube-api-access-8kzmg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5qqtm\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 
21:01:59.618281 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5qqtm\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.623298 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5qqtm\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.623795 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5qqtm\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.637470 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kzmg\" (UniqueName: \"kubernetes.io/projected/b37a021c-5749-4a8c-b0ca-22cc684d3c78-kube-api-access-8kzmg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5qqtm\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:01:59 crc kubenswrapper[5029]: I0313 21:01:59.729214 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.156923 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557262-jvdhx"] Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.159990 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557262-jvdhx" Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.162931 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.167019 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.169450 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.176874 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557262-jvdhx"] Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.242895 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7r7f\" (UniqueName: \"kubernetes.io/projected/9c211696-0d5f-416b-8fff-e0294fa5542a-kube-api-access-l7r7f\") pod \"auto-csr-approver-29557262-jvdhx\" (UID: \"9c211696-0d5f-416b-8fff-e0294fa5542a\") " pod="openshift-infra/auto-csr-approver-29557262-jvdhx" Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.286688 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm"] Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.339137 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" 
event={"ID":"b37a021c-5749-4a8c-b0ca-22cc684d3c78","Type":"ContainerStarted","Data":"051c5adc07dc0835008a11cad1684dd5faff0988fa29ee014ea495160a20a342"} Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.344445 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7r7f\" (UniqueName: \"kubernetes.io/projected/9c211696-0d5f-416b-8fff-e0294fa5542a-kube-api-access-l7r7f\") pod \"auto-csr-approver-29557262-jvdhx\" (UID: \"9c211696-0d5f-416b-8fff-e0294fa5542a\") " pod="openshift-infra/auto-csr-approver-29557262-jvdhx" Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.363890 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7r7f\" (UniqueName: \"kubernetes.io/projected/9c211696-0d5f-416b-8fff-e0294fa5542a-kube-api-access-l7r7f\") pod \"auto-csr-approver-29557262-jvdhx\" (UID: \"9c211696-0d5f-416b-8fff-e0294fa5542a\") " pod="openshift-infra/auto-csr-approver-29557262-jvdhx" Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.493486 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557262-jvdhx" Mar 13 21:02:00 crc kubenswrapper[5029]: W0313 21:02:00.949268 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c211696_0d5f_416b_8fff_e0294fa5542a.slice/crio-e45b77f665faf01c1df5e77749ef3929cb3bc385137f115ee76a1490423860d1 WatchSource:0}: Error finding container e45b77f665faf01c1df5e77749ef3929cb3bc385137f115ee76a1490423860d1: Status 404 returned error can't find the container with id e45b77f665faf01c1df5e77749ef3929cb3bc385137f115ee76a1490423860d1 Mar 13 21:02:00 crc kubenswrapper[5029]: I0313 21:02:00.949389 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557262-jvdhx"] Mar 13 21:02:01 crc kubenswrapper[5029]: I0313 21:02:01.350694 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" event={"ID":"b37a021c-5749-4a8c-b0ca-22cc684d3c78","Type":"ContainerStarted","Data":"a929f3b287e018a28124342a186666b2cd632d69cfa30c0fa2049b0847705952"} Mar 13 21:02:01 crc kubenswrapper[5029]: I0313 21:02:01.352880 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557262-jvdhx" event={"ID":"9c211696-0d5f-416b-8fff-e0294fa5542a","Type":"ContainerStarted","Data":"e45b77f665faf01c1df5e77749ef3929cb3bc385137f115ee76a1490423860d1"} Mar 13 21:02:01 crc kubenswrapper[5029]: I0313 21:02:01.375345 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" podStartSLOduration=1.737348222 podStartE2EDuration="2.375319472s" podCreationTimestamp="2026-03-13 21:01:59 +0000 UTC" firstStartedPulling="2026-03-13 21:02:00.290368406 +0000 UTC m=+2080.306450809" lastFinishedPulling="2026-03-13 21:02:00.928339656 +0000 UTC m=+2080.944422059" observedRunningTime="2026-03-13 21:02:01.370527361 +0000 UTC 
m=+2081.386609784" watchObservedRunningTime="2026-03-13 21:02:01.375319472 +0000 UTC m=+2081.391401875" Mar 13 21:02:02 crc kubenswrapper[5029]: I0313 21:02:02.366110 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557262-jvdhx" event={"ID":"9c211696-0d5f-416b-8fff-e0294fa5542a","Type":"ContainerStarted","Data":"5522695e3c601c3fa50856011a32bb1c8d85c4a3b50585386e29cd6eaf955657"} Mar 13 21:02:02 crc kubenswrapper[5029]: I0313 21:02:02.389170 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557262-jvdhx" podStartSLOduration=1.458694087 podStartE2EDuration="2.389148715s" podCreationTimestamp="2026-03-13 21:02:00 +0000 UTC" firstStartedPulling="2026-03-13 21:02:00.952007512 +0000 UTC m=+2080.968089915" lastFinishedPulling="2026-03-13 21:02:01.88246214 +0000 UTC m=+2081.898544543" observedRunningTime="2026-03-13 21:02:02.382887484 +0000 UTC m=+2082.398969897" watchObservedRunningTime="2026-03-13 21:02:02.389148715 +0000 UTC m=+2082.405231118" Mar 13 21:02:03 crc kubenswrapper[5029]: I0313 21:02:03.382525 5029 generic.go:334] "Generic (PLEG): container finished" podID="9c211696-0d5f-416b-8fff-e0294fa5542a" containerID="5522695e3c601c3fa50856011a32bb1c8d85c4a3b50585386e29cd6eaf955657" exitCode=0 Mar 13 21:02:03 crc kubenswrapper[5029]: I0313 21:02:03.383008 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557262-jvdhx" event={"ID":"9c211696-0d5f-416b-8fff-e0294fa5542a","Type":"ContainerDied","Data":"5522695e3c601c3fa50856011a32bb1c8d85c4a3b50585386e29cd6eaf955657"} Mar 13 21:02:04 crc kubenswrapper[5029]: I0313 21:02:04.740781 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557262-jvdhx" Mar 13 21:02:04 crc kubenswrapper[5029]: I0313 21:02:04.853318 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7r7f\" (UniqueName: \"kubernetes.io/projected/9c211696-0d5f-416b-8fff-e0294fa5542a-kube-api-access-l7r7f\") pod \"9c211696-0d5f-416b-8fff-e0294fa5542a\" (UID: \"9c211696-0d5f-416b-8fff-e0294fa5542a\") " Mar 13 21:02:04 crc kubenswrapper[5029]: I0313 21:02:04.859281 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c211696-0d5f-416b-8fff-e0294fa5542a-kube-api-access-l7r7f" (OuterVolumeSpecName: "kube-api-access-l7r7f") pod "9c211696-0d5f-416b-8fff-e0294fa5542a" (UID: "9c211696-0d5f-416b-8fff-e0294fa5542a"). InnerVolumeSpecName "kube-api-access-l7r7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:02:04 crc kubenswrapper[5029]: I0313 21:02:04.957071 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7r7f\" (UniqueName: \"kubernetes.io/projected/9c211696-0d5f-416b-8fff-e0294fa5542a-kube-api-access-l7r7f\") on node \"crc\" DevicePath \"\"" Mar 13 21:02:05 crc kubenswrapper[5029]: I0313 21:02:05.413095 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557262-jvdhx" event={"ID":"9c211696-0d5f-416b-8fff-e0294fa5542a","Type":"ContainerDied","Data":"e45b77f665faf01c1df5e77749ef3929cb3bc385137f115ee76a1490423860d1"} Mar 13 21:02:05 crc kubenswrapper[5029]: I0313 21:02:05.413769 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45b77f665faf01c1df5e77749ef3929cb3bc385137f115ee76a1490423860d1" Mar 13 21:02:05 crc kubenswrapper[5029]: I0313 21:02:05.414295 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557262-jvdhx" Mar 13 21:02:05 crc kubenswrapper[5029]: I0313 21:02:05.475059 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557256-848ts"] Mar 13 21:02:05 crc kubenswrapper[5029]: I0313 21:02:05.485595 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557256-848ts"] Mar 13 21:02:06 crc kubenswrapper[5029]: I0313 21:02:06.613874 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6e3f8b-d4fc-495b-b76d-a97dc036b482" path="/var/lib/kubelet/pods/4c6e3f8b-d4fc-495b-b76d-a97dc036b482/volumes" Mar 13 21:02:09 crc kubenswrapper[5029]: I0313 21:02:09.454500 5029 generic.go:334] "Generic (PLEG): container finished" podID="b37a021c-5749-4a8c-b0ca-22cc684d3c78" containerID="a929f3b287e018a28124342a186666b2cd632d69cfa30c0fa2049b0847705952" exitCode=0 Mar 13 21:02:09 crc kubenswrapper[5029]: I0313 21:02:09.454590 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" event={"ID":"b37a021c-5749-4a8c-b0ca-22cc684d3c78","Type":"ContainerDied","Data":"a929f3b287e018a28124342a186666b2cd632d69cfa30c0fa2049b0847705952"} Mar 13 21:02:10 crc kubenswrapper[5029]: I0313 21:02:10.903554 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:02:10 crc kubenswrapper[5029]: I0313 21:02:10.993414 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-ssh-key-openstack-edpm-ipam\") pod \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " Mar 13 21:02:10 crc kubenswrapper[5029]: I0313 21:02:10.993580 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-inventory\") pod \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " Mar 13 21:02:10 crc kubenswrapper[5029]: I0313 21:02:10.993747 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kzmg\" (UniqueName: \"kubernetes.io/projected/b37a021c-5749-4a8c-b0ca-22cc684d3c78-kube-api-access-8kzmg\") pod \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\" (UID: \"b37a021c-5749-4a8c-b0ca-22cc684d3c78\") " Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.011938 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37a021c-5749-4a8c-b0ca-22cc684d3c78-kube-api-access-8kzmg" (OuterVolumeSpecName: "kube-api-access-8kzmg") pod "b37a021c-5749-4a8c-b0ca-22cc684d3c78" (UID: "b37a021c-5749-4a8c-b0ca-22cc684d3c78"). InnerVolumeSpecName "kube-api-access-8kzmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.025365 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-inventory" (OuterVolumeSpecName: "inventory") pod "b37a021c-5749-4a8c-b0ca-22cc684d3c78" (UID: "b37a021c-5749-4a8c-b0ca-22cc684d3c78"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.046288 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b37a021c-5749-4a8c-b0ca-22cc684d3c78" (UID: "b37a021c-5749-4a8c-b0ca-22cc684d3c78"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.096573 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.096612 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37a021c-5749-4a8c-b0ca-22cc684d3c78-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.096622 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kzmg\" (UniqueName: \"kubernetes.io/projected/b37a021c-5749-4a8c-b0ca-22cc684d3c78-kube-api-access-8kzmg\") on node \"crc\" DevicePath \"\"" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.475627 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" event={"ID":"b37a021c-5749-4a8c-b0ca-22cc684d3c78","Type":"ContainerDied","Data":"051c5adc07dc0835008a11cad1684dd5faff0988fa29ee014ea495160a20a342"} Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.475679 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="051c5adc07dc0835008a11cad1684dd5faff0988fa29ee014ea495160a20a342" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 
21:02:11.475722 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5qqtm" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.704883 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29"] Mar 13 21:02:11 crc kubenswrapper[5029]: E0313 21:02:11.705684 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c211696-0d5f-416b-8fff-e0294fa5542a" containerName="oc" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.705705 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c211696-0d5f-416b-8fff-e0294fa5542a" containerName="oc" Mar 13 21:02:11 crc kubenswrapper[5029]: E0313 21:02:11.705729 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37a021c-5749-4a8c-b0ca-22cc684d3c78" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.705739 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37a021c-5749-4a8c-b0ca-22cc684d3c78" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.705954 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c211696-0d5f-416b-8fff-e0294fa5542a" containerName="oc" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.705975 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37a021c-5749-4a8c-b0ca-22cc684d3c78" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.706715 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.710525 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.711326 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.712538 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.718404 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.757303 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29"] Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.763034 5029 scope.go:117] "RemoveContainer" containerID="bdd7ae829b6bf11f0af1e9614cb5e88867ce06c334b9202daaa081992a80bdea" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.814131 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-22m29\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.814213 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sl28\" (UniqueName: \"kubernetes.io/projected/e2ccac21-2f90-4c85-aaf5-edd2adb44957-kube-api-access-7sl28\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-22m29\" (UID: 
\"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.814265 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-22m29\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.819502 5029 scope.go:117] "RemoveContainer" containerID="b06e3cf6da382dfe5b77c9b04c7e63c9d3c18f05bec4f9e6527a596c99b57745" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.897982 5029 scope.go:117] "RemoveContainer" containerID="40fecae17729b30acfe9ac26f9d5aa494dfd0ac15b787ab6f3d6f3a5a46a741f" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.916306 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sl28\" (UniqueName: \"kubernetes.io/projected/e2ccac21-2f90-4c85-aaf5-edd2adb44957-kube-api-access-7sl28\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-22m29\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.916378 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-22m29\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.916543 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-22m29\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.921162 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-22m29\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.921535 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-22m29\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:11 crc kubenswrapper[5029]: I0313 21:02:11.933718 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sl28\" (UniqueName: \"kubernetes.io/projected/e2ccac21-2f90-4c85-aaf5-edd2adb44957-kube-api-access-7sl28\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-22m29\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:12 crc kubenswrapper[5029]: I0313 21:02:12.030903 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:12 crc kubenswrapper[5029]: I0313 21:02:12.546792 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29"] Mar 13 21:02:12 crc kubenswrapper[5029]: I0313 21:02:12.601320 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1" Mar 13 21:02:13 crc kubenswrapper[5029]: I0313 21:02:13.495543 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" event={"ID":"e2ccac21-2f90-4c85-aaf5-edd2adb44957","Type":"ContainerStarted","Data":"7d3b7e09d73d1d1a8409e7a2307b5b343837652e2a4311292fdb0c462542103a"} Mar 13 21:02:13 crc kubenswrapper[5029]: I0313 21:02:13.500035 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"b3472f16590166b79f46f34c6217c66c7d8b48ea3fca5ec24ca6412baf78724c"} Mar 13 21:02:14 crc kubenswrapper[5029]: I0313 21:02:14.511517 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" event={"ID":"e2ccac21-2f90-4c85-aaf5-edd2adb44957","Type":"ContainerStarted","Data":"3fd16837d6aa563ee0932e15c667c6f0a34175953ec481f19ae6904c6cf765c9"} Mar 13 21:02:16 crc kubenswrapper[5029]: I0313 21:02:16.037361 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" podStartSLOduration=4.172056517 podStartE2EDuration="5.037340444s" podCreationTimestamp="2026-03-13 21:02:11 +0000 UTC" firstStartedPulling="2026-03-13 21:02:12.550707138 +0000 UTC m=+2092.566789532" lastFinishedPulling="2026-03-13 21:02:13.415991056 +0000 UTC m=+2093.432073459" observedRunningTime="2026-03-13 
21:02:14.534116878 +0000 UTC m=+2094.550199301" watchObservedRunningTime="2026-03-13 21:02:16.037340444 +0000 UTC m=+2096.053422837" Mar 13 21:02:16 crc kubenswrapper[5029]: I0313 21:02:16.049168 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-cgpjg"] Mar 13 21:02:16 crc kubenswrapper[5029]: I0313 21:02:16.060821 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-cgpjg"] Mar 13 21:02:16 crc kubenswrapper[5029]: I0313 21:02:16.612745 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b351b861-896b-4e82-8636-23800ab0c89c" path="/var/lib/kubelet/pods/b351b861-896b-4e82-8636-23800ab0c89c/volumes" Mar 13 21:02:22 crc kubenswrapper[5029]: I0313 21:02:22.584646 5029 generic.go:334] "Generic (PLEG): container finished" podID="e2ccac21-2f90-4c85-aaf5-edd2adb44957" containerID="3fd16837d6aa563ee0932e15c667c6f0a34175953ec481f19ae6904c6cf765c9" exitCode=0 Mar 13 21:02:22 crc kubenswrapper[5029]: I0313 21:02:22.584740 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" event={"ID":"e2ccac21-2f90-4c85-aaf5-edd2adb44957","Type":"ContainerDied","Data":"3fd16837d6aa563ee0932e15c667c6f0a34175953ec481f19ae6904c6cf765c9"} Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.019087 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.081287 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sl28\" (UniqueName: \"kubernetes.io/projected/e2ccac21-2f90-4c85-aaf5-edd2adb44957-kube-api-access-7sl28\") pod \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.081764 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-ssh-key-openstack-edpm-ipam\") pod \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.081793 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-inventory\") pod \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\" (UID: \"e2ccac21-2f90-4c85-aaf5-edd2adb44957\") " Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.102205 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ccac21-2f90-4c85-aaf5-edd2adb44957-kube-api-access-7sl28" (OuterVolumeSpecName: "kube-api-access-7sl28") pod "e2ccac21-2f90-4c85-aaf5-edd2adb44957" (UID: "e2ccac21-2f90-4c85-aaf5-edd2adb44957"). InnerVolumeSpecName "kube-api-access-7sl28". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.114106 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-inventory" (OuterVolumeSpecName: "inventory") pod "e2ccac21-2f90-4c85-aaf5-edd2adb44957" (UID: "e2ccac21-2f90-4c85-aaf5-edd2adb44957"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.115603 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e2ccac21-2f90-4c85-aaf5-edd2adb44957" (UID: "e2ccac21-2f90-4c85-aaf5-edd2adb44957"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.184407 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sl28\" (UniqueName: \"kubernetes.io/projected/e2ccac21-2f90-4c85-aaf5-edd2adb44957-kube-api-access-7sl28\") on node \"crc\" DevicePath \"\"" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.184454 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.184473 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ccac21-2f90-4c85-aaf5-edd2adb44957-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.604759 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.612901 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-22m29" event={"ID":"e2ccac21-2f90-4c85-aaf5-edd2adb44957","Type":"ContainerDied","Data":"7d3b7e09d73d1d1a8409e7a2307b5b343837652e2a4311292fdb0c462542103a"} Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.612953 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d3b7e09d73d1d1a8409e7a2307b5b343837652e2a4311292fdb0c462542103a" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.706056 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9"] Mar 13 21:02:24 crc kubenswrapper[5029]: E0313 21:02:24.706747 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ccac21-2f90-4c85-aaf5-edd2adb44957" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.706769 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ccac21-2f90-4c85-aaf5-edd2adb44957" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.707007 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ccac21-2f90-4c85-aaf5-edd2adb44957" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.707981 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.710213 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.713291 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.713555 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.713801 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.714016 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.714187 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.714349 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.715158 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.718941 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9"] Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.798183 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.798293 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.798348 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.798394 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.798475 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.798500 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.798682 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.798769 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.798981 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.799046 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.799077 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlnrv\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-kube-api-access-rlnrv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.799109 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.799134 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.799155 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.900872 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.900952 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.900976 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlnrv\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-kube-api-access-rlnrv\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901002 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901025 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901041 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901066 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901087 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901119 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901157 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901201 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc 
kubenswrapper[5029]: I0313 21:02:24.901222 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901255 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.901278 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.906391 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 
21:02:24.906748 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.906930 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.907278 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.908007 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.908532 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.909417 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.909529 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.909952 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.910486 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-telemetry-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.910682 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.911000 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.911681 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:24 crc kubenswrapper[5029]: I0313 21:02:24.917829 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlnrv\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-kube-api-access-rlnrv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:25 crc kubenswrapper[5029]: I0313 21:02:25.028563 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:02:25 crc kubenswrapper[5029]: I0313 21:02:25.531508 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9"] Mar 13 21:02:25 crc kubenswrapper[5029]: W0313 21:02:25.532535 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb4e8811_7f7d_4e55_adc2_d75f2c5c007a.slice/crio-5beb6bc8437b0fa56618d8d18a5a786b4be1382886cb3fde6dae7fbb54cc5ef0 WatchSource:0}: Error finding container 5beb6bc8437b0fa56618d8d18a5a786b4be1382886cb3fde6dae7fbb54cc5ef0: Status 404 returned error can't find the container with id 5beb6bc8437b0fa56618d8d18a5a786b4be1382886cb3fde6dae7fbb54cc5ef0 Mar 13 21:02:25 crc kubenswrapper[5029]: I0313 21:02:25.616534 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" event={"ID":"db4e8811-7f7d-4e55-adc2-d75f2c5c007a","Type":"ContainerStarted","Data":"5beb6bc8437b0fa56618d8d18a5a786b4be1382886cb3fde6dae7fbb54cc5ef0"} Mar 13 21:02:26 crc kubenswrapper[5029]: I0313 21:02:26.628121 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" event={"ID":"db4e8811-7f7d-4e55-adc2-d75f2c5c007a","Type":"ContainerStarted","Data":"c9229be7c44d268767e56dcb9938313b9eb94976e9a6ac68abb180b2c9455ee3"} Mar 13 21:03:06 crc kubenswrapper[5029]: I0313 21:03:06.067320 5029 generic.go:334] "Generic (PLEG): container finished" podID="db4e8811-7f7d-4e55-adc2-d75f2c5c007a" containerID="c9229be7c44d268767e56dcb9938313b9eb94976e9a6ac68abb180b2c9455ee3" exitCode=0 Mar 13 21:03:06 crc kubenswrapper[5029]: 
I0313 21:03:06.067401 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" event={"ID":"db4e8811-7f7d-4e55-adc2-d75f2c5c007a","Type":"ContainerDied","Data":"c9229be7c44d268767e56dcb9938313b9eb94976e9a6ac68abb180b2c9455ee3"} Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.527018 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.604297 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ssh-key-openstack-edpm-ipam\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.604734 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.604785 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-inventory\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.604814 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ovn-combined-ca-bundle\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: 
\"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.604895 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-neutron-metadata-combined-ca-bundle\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.604954 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-bootstrap-combined-ca-bundle\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.605058 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.605107 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-libvirt-combined-ca-bundle\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.605141 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: 
\"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.605186 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-nova-combined-ca-bundle\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.605225 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.605268 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-telemetry-combined-ca-bundle\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.605324 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-repo-setup-combined-ca-bundle\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.605444 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlnrv\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-kube-api-access-rlnrv\") pod \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\" (UID: \"db4e8811-7f7d-4e55-adc2-d75f2c5c007a\") " Mar 13 21:03:07 crc 
kubenswrapper[5029]: I0313 21:03:07.611734 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.611790 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.613327 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.613540 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.613743 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.614756 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.614975 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.615138 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.616550 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.616944 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.627314 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.627722 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-kube-api-access-rlnrv" (OuterVolumeSpecName: "kube-api-access-rlnrv") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "kube-api-access-rlnrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.642784 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-inventory" (OuterVolumeSpecName: "inventory") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.644090 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "db4e8811-7f7d-4e55-adc2-d75f2c5c007a" (UID: "db4e8811-7f7d-4e55-adc2-d75f2c5c007a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709015 5029 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709074 5029 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709089 5029 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709099 5029 reconciler_common.go:293] "Volume detached for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709110 5029 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709121 5029 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709133 5029 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709144 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlnrv\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-kube-api-access-rlnrv\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709154 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709164 5029 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath 
\"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709177 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709187 5029 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709201 5029 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:07 crc kubenswrapper[5029]: I0313 21:03:07.709210 5029 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4e8811-7f7d-4e55-adc2-d75f2c5c007a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.087639 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" event={"ID":"db4e8811-7f7d-4e55-adc2-d75f2c5c007a","Type":"ContainerDied","Data":"5beb6bc8437b0fa56618d8d18a5a786b4be1382886cb3fde6dae7fbb54cc5ef0"} Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.087679 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5beb6bc8437b0fa56618d8d18a5a786b4be1382886cb3fde6dae7fbb54cc5ef0" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.087701 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.336400 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb"] Mar 13 21:03:08 crc kubenswrapper[5029]: E0313 21:03:08.336971 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4e8811-7f7d-4e55-adc2-d75f2c5c007a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.336994 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4e8811-7f7d-4e55-adc2-d75f2c5c007a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.337262 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4e8811-7f7d-4e55-adc2-d75f2c5c007a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.338045 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.340455 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.340567 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.340810 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.340941 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.343808 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.352987 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb"] Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.426945 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.427057 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z2hm\" (UniqueName: \"kubernetes.io/projected/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-kube-api-access-9z2hm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: 
\"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.427110 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.427182 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.427382 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.529933 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.530014 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9z2hm\" (UniqueName: \"kubernetes.io/projected/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-kube-api-access-9z2hm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.530050 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.530071 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.530134 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.531705 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.534328 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.534769 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.535304 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.549832 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z2hm\" (UniqueName: \"kubernetes.io/projected/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-kube-api-access-9z2hm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-q87rb\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:08 crc kubenswrapper[5029]: I0313 21:03:08.657830 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:03:09 crc kubenswrapper[5029]: I0313 21:03:09.199881 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb"] Mar 13 21:03:10 crc kubenswrapper[5029]: I0313 21:03:10.119599 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" event={"ID":"f8235dbd-1bae-4cce-a053-03f7c07d6ce7","Type":"ContainerStarted","Data":"c3d941ba9d385177ecf5bb7ea37706799a12006373f7eceafe7e3257320f2baf"} Mar 13 21:03:10 crc kubenswrapper[5029]: I0313 21:03:10.120026 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" event={"ID":"f8235dbd-1bae-4cce-a053-03f7c07d6ce7","Type":"ContainerStarted","Data":"e31fbafaf7d64ee71eaaacefa1d82827f58aa21973876b7287c1a58481019cb6"} Mar 13 21:03:10 crc kubenswrapper[5029]: I0313 21:03:10.150448 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" podStartSLOduration=1.6816790419999998 podStartE2EDuration="2.150406511s" podCreationTimestamp="2026-03-13 21:03:08 +0000 UTC" firstStartedPulling="2026-03-13 21:03:09.211840762 +0000 UTC m=+2149.227923165" lastFinishedPulling="2026-03-13 21:03:09.680568231 +0000 UTC m=+2149.696650634" observedRunningTime="2026-03-13 21:03:10.14449828 +0000 UTC m=+2150.160580723" watchObservedRunningTime="2026-03-13 21:03:10.150406511 +0000 UTC m=+2150.166488914" Mar 13 21:03:12 crc kubenswrapper[5029]: I0313 21:03:12.011138 5029 scope.go:117] "RemoveContainer" containerID="3496b61b9e2ef305e95c3360c42df3a18cb712cb78044cc0aa51450fea1e4444" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.121778 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vh5rw"] Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.124477 5029 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.137371 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh5rw"] Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.172135 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj2sc\" (UniqueName: \"kubernetes.io/projected/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-kube-api-access-jj2sc\") pod \"redhat-operators-vh5rw\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.172508 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-catalog-content\") pod \"redhat-operators-vh5rw\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.172576 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-utilities\") pod \"redhat-operators-vh5rw\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.274843 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj2sc\" (UniqueName: \"kubernetes.io/projected/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-kube-api-access-jj2sc\") pod \"redhat-operators-vh5rw\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.275241 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-catalog-content\") pod \"redhat-operators-vh5rw\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.275276 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-utilities\") pod \"redhat-operators-vh5rw\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.275988 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-utilities\") pod \"redhat-operators-vh5rw\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.275986 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-catalog-content\") pod \"redhat-operators-vh5rw\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.295971 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj2sc\" (UniqueName: \"kubernetes.io/projected/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-kube-api-access-jj2sc\") pod \"redhat-operators-vh5rw\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.446809 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:25 crc kubenswrapper[5029]: I0313 21:03:25.944002 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh5rw"] Mar 13 21:03:26 crc kubenswrapper[5029]: I0313 21:03:26.266200 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5rw" event={"ID":"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779","Type":"ContainerStarted","Data":"45cc4b2d41b52e70dfa6e6687ab7cf17ead0cf156bb2c05157fb684b7df8a659"} Mar 13 21:03:27 crc kubenswrapper[5029]: I0313 21:03:27.279505 5029 generic.go:334] "Generic (PLEG): container finished" podID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerID="675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b" exitCode=0 Mar 13 21:03:27 crc kubenswrapper[5029]: I0313 21:03:27.279616 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5rw" event={"ID":"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779","Type":"ContainerDied","Data":"675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b"} Mar 13 21:03:28 crc kubenswrapper[5029]: I0313 21:03:28.293490 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5rw" event={"ID":"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779","Type":"ContainerStarted","Data":"99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c"} Mar 13 21:03:33 crc kubenswrapper[5029]: I0313 21:03:33.357493 5029 generic.go:334] "Generic (PLEG): container finished" podID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerID="99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c" exitCode=0 Mar 13 21:03:33 crc kubenswrapper[5029]: I0313 21:03:33.357585 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5rw" 
event={"ID":"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779","Type":"ContainerDied","Data":"99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c"} Mar 13 21:03:34 crc kubenswrapper[5029]: I0313 21:03:34.370807 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5rw" event={"ID":"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779","Type":"ContainerStarted","Data":"d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa"} Mar 13 21:03:34 crc kubenswrapper[5029]: I0313 21:03:34.395183 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vh5rw" podStartSLOduration=2.881643841 podStartE2EDuration="9.395157871s" podCreationTimestamp="2026-03-13 21:03:25 +0000 UTC" firstStartedPulling="2026-03-13 21:03:27.282415839 +0000 UTC m=+2167.298498242" lastFinishedPulling="2026-03-13 21:03:33.795929869 +0000 UTC m=+2173.812012272" observedRunningTime="2026-03-13 21:03:34.388264473 +0000 UTC m=+2174.404346906" watchObservedRunningTime="2026-03-13 21:03:34.395157871 +0000 UTC m=+2174.411240274" Mar 13 21:03:35 crc kubenswrapper[5029]: I0313 21:03:35.447408 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:35 crc kubenswrapper[5029]: I0313 21:03:35.447507 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:36 crc kubenswrapper[5029]: I0313 21:03:36.498564 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vh5rw" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="registry-server" probeResult="failure" output=< Mar 13 21:03:36 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s Mar 13 21:03:36 crc kubenswrapper[5029]: > Mar 13 21:03:46 crc kubenswrapper[5029]: I0313 21:03:46.498441 5029 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-vh5rw" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="registry-server" probeResult="failure" output=< Mar 13 21:03:46 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s Mar 13 21:03:46 crc kubenswrapper[5029]: > Mar 13 21:03:55 crc kubenswrapper[5029]: I0313 21:03:55.505772 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:55 crc kubenswrapper[5029]: I0313 21:03:55.563029 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:56 crc kubenswrapper[5029]: I0313 21:03:56.326148 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh5rw"] Mar 13 21:03:56 crc kubenswrapper[5029]: I0313 21:03:56.592748 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vh5rw" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="registry-server" containerID="cri-o://d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa" gracePeriod=2 Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.065634 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.121589 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-catalog-content\") pod \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.121764 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj2sc\" (UniqueName: \"kubernetes.io/projected/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-kube-api-access-jj2sc\") pod \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.121809 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-utilities\") pod \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\" (UID: \"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779\") " Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.122975 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-utilities" (OuterVolumeSpecName: "utilities") pod "a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" (UID: "a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.131656 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-kube-api-access-jj2sc" (OuterVolumeSpecName: "kube-api-access-jj2sc") pod "a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" (UID: "a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779"). InnerVolumeSpecName "kube-api-access-jj2sc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.224793 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj2sc\" (UniqueName: \"kubernetes.io/projected/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-kube-api-access-jj2sc\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.224835 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.272358 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" (UID: "a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.327215 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.605988 5029 generic.go:334] "Generic (PLEG): container finished" podID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerID="d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa" exitCode=0 Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.606035 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5rw" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.606069 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5rw" event={"ID":"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779","Type":"ContainerDied","Data":"d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa"} Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.606465 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5rw" event={"ID":"a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779","Type":"ContainerDied","Data":"45cc4b2d41b52e70dfa6e6687ab7cf17ead0cf156bb2c05157fb684b7df8a659"} Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.606529 5029 scope.go:117] "RemoveContainer" containerID="d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.635127 5029 scope.go:117] "RemoveContainer" containerID="99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.653359 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh5rw"] Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.663895 5029 scope.go:117] "RemoveContainer" containerID="675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.663904 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vh5rw"] Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.709869 5029 scope.go:117] "RemoveContainer" containerID="d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa" Mar 13 21:03:57 crc kubenswrapper[5029]: E0313 21:03:57.710545 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa\": container with ID starting with d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa not found: ID does not exist" containerID="d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.710613 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa"} err="failed to get container status \"d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa\": rpc error: code = NotFound desc = could not find container \"d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa\": container with ID starting with d9bb8ca3588f2e1447d1bb2e6d414fc11236e853eace9b0d5e94c886fb16d0aa not found: ID does not exist" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.710651 5029 scope.go:117] "RemoveContainer" containerID="99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c" Mar 13 21:03:57 crc kubenswrapper[5029]: E0313 21:03:57.711143 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c\": container with ID starting with 99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c not found: ID does not exist" containerID="99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.711181 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c"} err="failed to get container status \"99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c\": rpc error: code = NotFound desc = could not find container \"99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c\": container with ID 
starting with 99d0232ed3919c4249d79ca2b84b04dde503a51e64685b70b064921aaf15765c not found: ID does not exist" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.711219 5029 scope.go:117] "RemoveContainer" containerID="675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b" Mar 13 21:03:57 crc kubenswrapper[5029]: E0313 21:03:57.711525 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b\": container with ID starting with 675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b not found: ID does not exist" containerID="675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b" Mar 13 21:03:57 crc kubenswrapper[5029]: I0313 21:03:57.711559 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b"} err="failed to get container status \"675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b\": rpc error: code = NotFound desc = could not find container \"675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b\": container with ID starting with 675efdbc9149e9c0d62ac063f7c7191f5547abd4c03fa87310d2296795f8cb1b not found: ID does not exist" Mar 13 21:03:58 crc kubenswrapper[5029]: I0313 21:03:58.612678 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" path="/var/lib/kubelet/pods/a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779/volumes" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.162024 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557264-wrd8g"] Mar 13 21:04:00 crc kubenswrapper[5029]: E0313 21:04:00.163584 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="extract-utilities" Mar 13 21:04:00 crc 
kubenswrapper[5029]: I0313 21:04:00.163613 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="extract-utilities" Mar 13 21:04:00 crc kubenswrapper[5029]: E0313 21:04:00.163673 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="extract-content" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.163683 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="extract-content" Mar 13 21:04:00 crc kubenswrapper[5029]: E0313 21:04:00.163705 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="registry-server" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.163715 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="registry-server" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.164399 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c0fa2d-6fb5-4ea2-8c0d-37b6088a9779" containerName="registry-server" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.165843 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557264-wrd8g" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.172886 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.176452 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.176907 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.192363 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2dnk\" (UniqueName: \"kubernetes.io/projected/2274a9fc-5569-4924-8713-c048d72509bf-kube-api-access-g2dnk\") pod \"auto-csr-approver-29557264-wrd8g\" (UID: \"2274a9fc-5569-4924-8713-c048d72509bf\") " pod="openshift-infra/auto-csr-approver-29557264-wrd8g" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.204986 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557264-wrd8g"] Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.294843 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2dnk\" (UniqueName: \"kubernetes.io/projected/2274a9fc-5569-4924-8713-c048d72509bf-kube-api-access-g2dnk\") pod \"auto-csr-approver-29557264-wrd8g\" (UID: \"2274a9fc-5569-4924-8713-c048d72509bf\") " pod="openshift-infra/auto-csr-approver-29557264-wrd8g" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.315401 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2dnk\" (UniqueName: \"kubernetes.io/projected/2274a9fc-5569-4924-8713-c048d72509bf-kube-api-access-g2dnk\") pod \"auto-csr-approver-29557264-wrd8g\" (UID: \"2274a9fc-5569-4924-8713-c048d72509bf\") " 
pod="openshift-infra/auto-csr-approver-29557264-wrd8g" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.495091 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557264-wrd8g" Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.929665 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557264-wrd8g"] Mar 13 21:04:00 crc kubenswrapper[5029]: I0313 21:04:00.934169 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 21:04:01 crc kubenswrapper[5029]: I0313 21:04:01.650249 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557264-wrd8g" event={"ID":"2274a9fc-5569-4924-8713-c048d72509bf","Type":"ContainerStarted","Data":"380ffe569b41c47119d5a9f43b8239a4f38c867e26d6efb7f963de88af79634b"} Mar 13 21:04:02 crc kubenswrapper[5029]: I0313 21:04:02.659669 5029 generic.go:334] "Generic (PLEG): container finished" podID="2274a9fc-5569-4924-8713-c048d72509bf" containerID="4a4c5fc6e4dcc19dde1dc505cfc6ce8b19e8eaf0ce37294a9b0248bd3bc0ab8e" exitCode=0 Mar 13 21:04:02 crc kubenswrapper[5029]: I0313 21:04:02.659827 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557264-wrd8g" event={"ID":"2274a9fc-5569-4924-8713-c048d72509bf","Type":"ContainerDied","Data":"4a4c5fc6e4dcc19dde1dc505cfc6ce8b19e8eaf0ce37294a9b0248bd3bc0ab8e"} Mar 13 21:04:04 crc kubenswrapper[5029]: I0313 21:04:04.007084 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557264-wrd8g" Mar 13 21:04:04 crc kubenswrapper[5029]: I0313 21:04:04.169919 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2dnk\" (UniqueName: \"kubernetes.io/projected/2274a9fc-5569-4924-8713-c048d72509bf-kube-api-access-g2dnk\") pod \"2274a9fc-5569-4924-8713-c048d72509bf\" (UID: \"2274a9fc-5569-4924-8713-c048d72509bf\") " Mar 13 21:04:04 crc kubenswrapper[5029]: I0313 21:04:04.176278 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2274a9fc-5569-4924-8713-c048d72509bf-kube-api-access-g2dnk" (OuterVolumeSpecName: "kube-api-access-g2dnk") pod "2274a9fc-5569-4924-8713-c048d72509bf" (UID: "2274a9fc-5569-4924-8713-c048d72509bf"). InnerVolumeSpecName "kube-api-access-g2dnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:04:04 crc kubenswrapper[5029]: I0313 21:04:04.272427 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2dnk\" (UniqueName: \"kubernetes.io/projected/2274a9fc-5569-4924-8713-c048d72509bf-kube-api-access-g2dnk\") on node \"crc\" DevicePath \"\"" Mar 13 21:04:04 crc kubenswrapper[5029]: I0313 21:04:04.678688 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557264-wrd8g" Mar 13 21:04:04 crc kubenswrapper[5029]: I0313 21:04:04.678708 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557264-wrd8g" event={"ID":"2274a9fc-5569-4924-8713-c048d72509bf","Type":"ContainerDied","Data":"380ffe569b41c47119d5a9f43b8239a4f38c867e26d6efb7f963de88af79634b"} Mar 13 21:04:04 crc kubenswrapper[5029]: I0313 21:04:04.679127 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="380ffe569b41c47119d5a9f43b8239a4f38c867e26d6efb7f963de88af79634b" Mar 13 21:04:05 crc kubenswrapper[5029]: I0313 21:04:05.082547 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557258-454kv"] Mar 13 21:04:05 crc kubenswrapper[5029]: I0313 21:04:05.090679 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557258-454kv"] Mar 13 21:04:06 crc kubenswrapper[5029]: I0313 21:04:06.611045 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="841dfc0b-34fc-46ec-bf2a-f9578c341e92" path="/var/lib/kubelet/pods/841dfc0b-34fc-46ec-bf2a-f9578c341e92/volumes" Mar 13 21:04:12 crc kubenswrapper[5029]: I0313 21:04:12.092145 5029 scope.go:117] "RemoveContainer" containerID="bfc158d8f0cf1ce7c33de51ee6e31b7a7fd29cbe5a31ba45e663fbaf00551664" Mar 13 21:04:13 crc kubenswrapper[5029]: I0313 21:04:13.754832 5029 generic.go:334] "Generic (PLEG): container finished" podID="f8235dbd-1bae-4cce-a053-03f7c07d6ce7" containerID="c3d941ba9d385177ecf5bb7ea37706799a12006373f7eceafe7e3257320f2baf" exitCode=0 Mar 13 21:04:13 crc kubenswrapper[5029]: I0313 21:04:13.754933 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" event={"ID":"f8235dbd-1bae-4cce-a053-03f7c07d6ce7","Type":"ContainerDied","Data":"c3d941ba9d385177ecf5bb7ea37706799a12006373f7eceafe7e3257320f2baf"} Mar 13 21:04:15 
crc kubenswrapper[5029]: I0313 21:04:15.206976 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.303464 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovncontroller-config-0\") pod \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.303639 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovn-combined-ca-bundle\") pod \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.303710 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ssh-key-openstack-edpm-ipam\") pod \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.303809 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z2hm\" (UniqueName: \"kubernetes.io/projected/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-kube-api-access-9z2hm\") pod \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\" (UID: \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.303910 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-inventory\") pod \"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\" (UID: 
\"f8235dbd-1bae-4cce-a053-03f7c07d6ce7\") " Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.310160 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f8235dbd-1bae-4cce-a053-03f7c07d6ce7" (UID: "f8235dbd-1bae-4cce-a053-03f7c07d6ce7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.310313 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-kube-api-access-9z2hm" (OuterVolumeSpecName: "kube-api-access-9z2hm") pod "f8235dbd-1bae-4cce-a053-03f7c07d6ce7" (UID: "f8235dbd-1bae-4cce-a053-03f7c07d6ce7"). InnerVolumeSpecName "kube-api-access-9z2hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.331179 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "f8235dbd-1bae-4cce-a053-03f7c07d6ce7" (UID: "f8235dbd-1bae-4cce-a053-03f7c07d6ce7"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.332574 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8235dbd-1bae-4cce-a053-03f7c07d6ce7" (UID: "f8235dbd-1bae-4cce-a053-03f7c07d6ce7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.338250 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-inventory" (OuterVolumeSpecName: "inventory") pod "f8235dbd-1bae-4cce-a053-03f7c07d6ce7" (UID: "f8235dbd-1bae-4cce-a053-03f7c07d6ce7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.406679 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z2hm\" (UniqueName: \"kubernetes.io/projected/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-kube-api-access-9z2hm\") on node \"crc\" DevicePath \"\"" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.406719 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.406731 5029 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.406741 5029 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.406750 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8235dbd-1bae-4cce-a053-03f7c07d6ce7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.776997 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" event={"ID":"f8235dbd-1bae-4cce-a053-03f7c07d6ce7","Type":"ContainerDied","Data":"e31fbafaf7d64ee71eaaacefa1d82827f58aa21973876b7287c1a58481019cb6"} Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.777048 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e31fbafaf7d64ee71eaaacefa1d82827f58aa21973876b7287c1a58481019cb6" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.777071 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-q87rb" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.862995 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs"] Mar 13 21:04:15 crc kubenswrapper[5029]: E0313 21:04:15.863510 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8235dbd-1bae-4cce-a053-03f7c07d6ce7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.863536 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8235dbd-1bae-4cce-a053-03f7c07d6ce7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 13 21:04:15 crc kubenswrapper[5029]: E0313 21:04:15.863553 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2274a9fc-5569-4924-8713-c048d72509bf" containerName="oc" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.863560 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="2274a9fc-5569-4924-8713-c048d72509bf" containerName="oc" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.863797 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8235dbd-1bae-4cce-a053-03f7c07d6ce7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.863824 5029 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2274a9fc-5569-4924-8713-c048d72509bf" containerName="oc" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.864603 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.866704 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.866747 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.867103 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.867178 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.867441 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.870716 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:04:15 crc kubenswrapper[5029]: I0313 21:04:15.876357 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs"] Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.020680 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.020768 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.020819 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.020912 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.021203 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mbjd\" (UniqueName: \"kubernetes.io/projected/be4091de-1faa-4cda-b53b-22c6a3b67e74-kube-api-access-4mbjd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.021447 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.123911 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.124265 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.124298 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc 
kubenswrapper[5029]: I0313 21:04:16.124348 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.124416 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mbjd\" (UniqueName: \"kubernetes.io/projected/be4091de-1faa-4cda-b53b-22c6a3b67e74-kube-api-access-4mbjd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.124453 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.128169 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.128344 5029 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.128819 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.128837 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.129938 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.143116 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mbjd\" (UniqueName: 
\"kubernetes.io/projected/be4091de-1faa-4cda-b53b-22c6a3b67e74-kube-api-access-4mbjd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.183539 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.681268 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs"] Mar 13 21:04:16 crc kubenswrapper[5029]: I0313 21:04:16.789003 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" event={"ID":"be4091de-1faa-4cda-b53b-22c6a3b67e74","Type":"ContainerStarted","Data":"847c63c7a385487ccd7bfecc07806c7eb04ad797c8d2900c002b9b0544a646f6"} Mar 13 21:04:17 crc kubenswrapper[5029]: I0313 21:04:17.800792 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" event={"ID":"be4091de-1faa-4cda-b53b-22c6a3b67e74","Type":"ContainerStarted","Data":"e00bb8f2556246b5b6cd86a50e66f5e2071437dc8321bbbbfcb8bb689912c915"} Mar 13 21:04:17 crc kubenswrapper[5029]: I0313 21:04:17.822040 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" podStartSLOduration=2.105356728 podStartE2EDuration="2.8220162s" podCreationTimestamp="2026-03-13 21:04:15 +0000 UTC" firstStartedPulling="2026-03-13 21:04:16.686056548 +0000 UTC m=+2216.702138951" lastFinishedPulling="2026-03-13 21:04:17.40271601 +0000 UTC m=+2217.418798423" observedRunningTime="2026-03-13 21:04:17.819940703 +0000 UTC m=+2217.836023096" watchObservedRunningTime="2026-03-13 21:04:17.8220162 
+0000 UTC m=+2217.838098593" Mar 13 21:04:31 crc kubenswrapper[5029]: I0313 21:04:31.950217 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:04:31 crc kubenswrapper[5029]: I0313 21:04:31.951277 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:05:01 crc kubenswrapper[5029]: I0313 21:05:01.955992 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:05:01 crc kubenswrapper[5029]: I0313 21:05:01.957032 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:05:06 crc kubenswrapper[5029]: I0313 21:05:06.338614 5029 generic.go:334] "Generic (PLEG): container finished" podID="be4091de-1faa-4cda-b53b-22c6a3b67e74" containerID="e00bb8f2556246b5b6cd86a50e66f5e2071437dc8321bbbbfcb8bb689912c915" exitCode=0 Mar 13 21:05:06 crc kubenswrapper[5029]: I0313 21:05:06.338722 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" 
event={"ID":"be4091de-1faa-4cda-b53b-22c6a3b67e74","Type":"ContainerDied","Data":"e00bb8f2556246b5b6cd86a50e66f5e2071437dc8321bbbbfcb8bb689912c915"} Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.859053 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.934470 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-inventory\") pod \"be4091de-1faa-4cda-b53b-22c6a3b67e74\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.934961 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-metadata-combined-ca-bundle\") pod \"be4091de-1faa-4cda-b53b-22c6a3b67e74\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.935088 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-nova-metadata-neutron-config-0\") pod \"be4091de-1faa-4cda-b53b-22c6a3b67e74\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.935140 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mbjd\" (UniqueName: \"kubernetes.io/projected/be4091de-1faa-4cda-b53b-22c6a3b67e74-kube-api-access-4mbjd\") pod \"be4091de-1faa-4cda-b53b-22c6a3b67e74\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.935177 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-ovn-metadata-agent-neutron-config-0\") pod \"be4091de-1faa-4cda-b53b-22c6a3b67e74\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.935212 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-ssh-key-openstack-edpm-ipam\") pod \"be4091de-1faa-4cda-b53b-22c6a3b67e74\" (UID: \"be4091de-1faa-4cda-b53b-22c6a3b67e74\") " Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.944117 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "be4091de-1faa-4cda-b53b-22c6a3b67e74" (UID: "be4091de-1faa-4cda-b53b-22c6a3b67e74"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.949231 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4091de-1faa-4cda-b53b-22c6a3b67e74-kube-api-access-4mbjd" (OuterVolumeSpecName: "kube-api-access-4mbjd") pod "be4091de-1faa-4cda-b53b-22c6a3b67e74" (UID: "be4091de-1faa-4cda-b53b-22c6a3b67e74"). InnerVolumeSpecName "kube-api-access-4mbjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.969805 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-inventory" (OuterVolumeSpecName: "inventory") pod "be4091de-1faa-4cda-b53b-22c6a3b67e74" (UID: "be4091de-1faa-4cda-b53b-22c6a3b67e74"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.970822 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "be4091de-1faa-4cda-b53b-22c6a3b67e74" (UID: "be4091de-1faa-4cda-b53b-22c6a3b67e74"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.972674 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "be4091de-1faa-4cda-b53b-22c6a3b67e74" (UID: "be4091de-1faa-4cda-b53b-22c6a3b67e74"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:05:07 crc kubenswrapper[5029]: I0313 21:05:07.982830 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "be4091de-1faa-4cda-b53b-22c6a3b67e74" (UID: "be4091de-1faa-4cda-b53b-22c6a3b67e74"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.039240 5029 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.039289 5029 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.039304 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mbjd\" (UniqueName: \"kubernetes.io/projected/be4091de-1faa-4cda-b53b-22c6a3b67e74-kube-api-access-4mbjd\") on node \"crc\" DevicePath \"\"" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.039318 5029 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.039329 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.039340 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be4091de-1faa-4cda-b53b-22c6a3b67e74-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.363448 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" event={"ID":"be4091de-1faa-4cda-b53b-22c6a3b67e74","Type":"ContainerDied","Data":"847c63c7a385487ccd7bfecc07806c7eb04ad797c8d2900c002b9b0544a646f6"} Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.363509 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="847c63c7a385487ccd7bfecc07806c7eb04ad797c8d2900c002b9b0544a646f6" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.363598 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.546919 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5"] Mar 13 21:05:08 crc kubenswrapper[5029]: E0313 21:05:08.547828 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4091de-1faa-4cda-b53b-22c6a3b67e74" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.547874 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4091de-1faa-4cda-b53b-22c6a3b67e74" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.548116 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4091de-1faa-4cda-b53b-22c6a3b67e74" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.549896 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.555740 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.555822 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.556157 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.556434 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.556723 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.572407 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5"] Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.653559 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.654276 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl6s8\" (UniqueName: \"kubernetes.io/projected/103d724b-82ad-4507-960c-6739fa89ab17-kube-api-access-gl6s8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: 
\"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.654414 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.654597 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.654729 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.756623 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.756695 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl6s8\" (UniqueName: \"kubernetes.io/projected/103d724b-82ad-4507-960c-6739fa89ab17-kube-api-access-gl6s8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.756719 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.756837 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.756876 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.761801 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" 
(UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5"
Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.762603 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5"
Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.763586 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5"
Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.764293 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5"
Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.779154 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl6s8\" (UniqueName: \"kubernetes.io/projected/103d724b-82ad-4507-960c-6739fa89ab17-kube-api-access-gl6s8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5"
Mar 13 21:05:08 crc kubenswrapper[5029]: I0313 21:05:08.871223 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5"
Mar 13 21:05:10 crc kubenswrapper[5029]: I0313 21:05:09.438914 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5"]
Mar 13 21:05:10 crc kubenswrapper[5029]: I0313 21:05:10.385730 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" event={"ID":"103d724b-82ad-4507-960c-6739fa89ab17","Type":"ContainerStarted","Data":"f2d521d74090b480181f4f413818fedcf33adc320c87e37ea994b647e0289c8e"}
Mar 13 21:05:10 crc kubenswrapper[5029]: I0313 21:05:10.386629 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" event={"ID":"103d724b-82ad-4507-960c-6739fa89ab17","Type":"ContainerStarted","Data":"d99696f6e163798d1d8acd68f44150d95dc0bb4b00ac2655858faa24a3768e44"}
Mar 13 21:05:10 crc kubenswrapper[5029]: I0313 21:05:10.416071 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" podStartSLOduration=1.940238658 podStartE2EDuration="2.416046614s" podCreationTimestamp="2026-03-13 21:05:08 +0000 UTC" firstStartedPulling="2026-03-13 21:05:09.427077223 +0000 UTC m=+2269.443159626" lastFinishedPulling="2026-03-13 21:05:09.902885179 +0000 UTC m=+2269.918967582" observedRunningTime="2026-03-13 21:05:10.41056482 +0000 UTC m=+2270.426647223" watchObservedRunningTime="2026-03-13 21:05:10.416046614 +0000 UTC m=+2270.432129017"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.707478 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-twzpk"]
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.710783 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.736606 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-utilities\") pod \"redhat-marketplace-twzpk\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") " pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.737143 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-catalog-content\") pod \"redhat-marketplace-twzpk\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") " pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.737321 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvdsv\" (UniqueName: \"kubernetes.io/projected/422f4b9f-2373-4076-95f1-1b9c8c19627d-kube-api-access-pvdsv\") pod \"redhat-marketplace-twzpk\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") " pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.742001 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzpk"]
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.839193 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-utilities\") pod \"redhat-marketplace-twzpk\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") " pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.839297 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-catalog-content\") pod \"redhat-marketplace-twzpk\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") " pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.839712 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-utilities\") pod \"redhat-marketplace-twzpk\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") " pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.839814 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-catalog-content\") pod \"redhat-marketplace-twzpk\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") " pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.839981 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvdsv\" (UniqueName: \"kubernetes.io/projected/422f4b9f-2373-4076-95f1-1b9c8c19627d-kube-api-access-pvdsv\") pod \"redhat-marketplace-twzpk\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") " pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:18 crc kubenswrapper[5029]: I0313 21:05:18.864922 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvdsv\" (UniqueName: \"kubernetes.io/projected/422f4b9f-2373-4076-95f1-1b9c8c19627d-kube-api-access-pvdsv\") pod \"redhat-marketplace-twzpk\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") " pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:19 crc kubenswrapper[5029]: I0313 21:05:19.039791 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:19 crc kubenswrapper[5029]: I0313 21:05:19.569278 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzpk"]
Mar 13 21:05:20 crc kubenswrapper[5029]: I0313 21:05:20.499079 5029 generic.go:334] "Generic (PLEG): container finished" podID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerID="117a9ca04860556e6ca1f133570daba6809c6469542291e9473106283ecad351" exitCode=0
Mar 13 21:05:20 crc kubenswrapper[5029]: I0313 21:05:20.499163 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzpk" event={"ID":"422f4b9f-2373-4076-95f1-1b9c8c19627d","Type":"ContainerDied","Data":"117a9ca04860556e6ca1f133570daba6809c6469542291e9473106283ecad351"}
Mar 13 21:05:20 crc kubenswrapper[5029]: I0313 21:05:20.499425 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzpk" event={"ID":"422f4b9f-2373-4076-95f1-1b9c8c19627d","Type":"ContainerStarted","Data":"ffa49ceafa0efc0dc3d031c7305e3f6e580554b773fddcc6565eba1be1ca661f"}
Mar 13 21:05:21 crc kubenswrapper[5029]: I0313 21:05:21.514339 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzpk" event={"ID":"422f4b9f-2373-4076-95f1-1b9c8c19627d","Type":"ContainerStarted","Data":"68ace0ac7586fc4494f92bfb8994f84fd140c25d86e7c102c8b6e98ae71e9d87"}
Mar 13 21:05:22 crc kubenswrapper[5029]: I0313 21:05:22.524524 5029 generic.go:334] "Generic (PLEG): container finished" podID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerID="68ace0ac7586fc4494f92bfb8994f84fd140c25d86e7c102c8b6e98ae71e9d87" exitCode=0
Mar 13 21:05:22 crc kubenswrapper[5029]: I0313 21:05:22.524625 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzpk" event={"ID":"422f4b9f-2373-4076-95f1-1b9c8c19627d","Type":"ContainerDied","Data":"68ace0ac7586fc4494f92bfb8994f84fd140c25d86e7c102c8b6e98ae71e9d87"}
Mar 13 21:05:23 crc kubenswrapper[5029]: I0313 21:05:23.538600 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzpk" event={"ID":"422f4b9f-2373-4076-95f1-1b9c8c19627d","Type":"ContainerStarted","Data":"b66168f331ab51333cd7c9299ada6e09f676cc011ae2deb185352bd37399272b"}
Mar 13 21:05:23 crc kubenswrapper[5029]: I0313 21:05:23.565656 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-twzpk" podStartSLOduration=3.093949926 podStartE2EDuration="5.565628365s" podCreationTimestamp="2026-03-13 21:05:18 +0000 UTC" firstStartedPulling="2026-03-13 21:05:20.50124643 +0000 UTC m=+2280.517328833" lastFinishedPulling="2026-03-13 21:05:22.972924869 +0000 UTC m=+2282.989007272" observedRunningTime="2026-03-13 21:05:23.565003606 +0000 UTC m=+2283.581086029" watchObservedRunningTime="2026-03-13 21:05:23.565628365 +0000 UTC m=+2283.581710768"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.041058 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.041831 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.096063 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.359208 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dgppk"]
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.363351 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.370874 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dgppk"]
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.379314 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-catalog-content\") pod \"community-operators-dgppk\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") " pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.379378 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7skc\" (UniqueName: \"kubernetes.io/projected/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-kube-api-access-p7skc\") pod \"community-operators-dgppk\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") " pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.379455 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-utilities\") pod \"community-operators-dgppk\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") " pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.481179 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-catalog-content\") pod \"community-operators-dgppk\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") " pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.481249 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7skc\" (UniqueName: \"kubernetes.io/projected/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-kube-api-access-p7skc\") pod \"community-operators-dgppk\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") " pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.481345 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-utilities\") pod \"community-operators-dgppk\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") " pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.481711 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-catalog-content\") pod \"community-operators-dgppk\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") " pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.482086 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-utilities\") pod \"community-operators-dgppk\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") " pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.514723 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7skc\" (UniqueName: \"kubernetes.io/projected/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-kube-api-access-p7skc\") pod \"community-operators-dgppk\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") " pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.661048 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:29 crc kubenswrapper[5029]: I0313 21:05:29.693015 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:30 crc kubenswrapper[5029]: I0313 21:05:30.305586 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dgppk"]
Mar 13 21:05:30 crc kubenswrapper[5029]: I0313 21:05:30.622143 5029 generic.go:334] "Generic (PLEG): container finished" podID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerID="93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f" exitCode=0
Mar 13 21:05:30 crc kubenswrapper[5029]: I0313 21:05:30.622910 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgppk" event={"ID":"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2","Type":"ContainerDied","Data":"93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f"}
Mar 13 21:05:30 crc kubenswrapper[5029]: I0313 21:05:30.622978 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgppk" event={"ID":"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2","Type":"ContainerStarted","Data":"f8d02bdf88092a5562631615796e858e8624aedfdaf8239c28570afd07d659d2"}
Mar 13 21:05:31 crc kubenswrapper[5029]: I0313 21:05:31.635221 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgppk" event={"ID":"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2","Type":"ContainerStarted","Data":"2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f"}
Mar 13 21:05:31 crc kubenswrapper[5029]: I0313 21:05:31.940534 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzpk"]
Mar 13 21:05:31 crc kubenswrapper[5029]: I0313 21:05:31.940896 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-twzpk" podUID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerName="registry-server" containerID="cri-o://b66168f331ab51333cd7c9299ada6e09f676cc011ae2deb185352bd37399272b" gracePeriod=2
Mar 13 21:05:31 crc kubenswrapper[5029]: I0313 21:05:31.950142 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:05:31 crc kubenswrapper[5029]: I0313 21:05:31.950218 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:05:31 crc kubenswrapper[5029]: I0313 21:05:31.950276 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2"
Mar 13 21:05:31 crc kubenswrapper[5029]: I0313 21:05:31.951569 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b3472f16590166b79f46f34c6217c66c7d8b48ea3fca5ec24ca6412baf78724c"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 21:05:31 crc kubenswrapper[5029]: I0313 21:05:31.951641 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://b3472f16590166b79f46f34c6217c66c7d8b48ea3fca5ec24ca6412baf78724c" gracePeriod=600
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.652035 5029 generic.go:334] "Generic (PLEG): container finished" podID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerID="2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f" exitCode=0
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.652122 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgppk" event={"ID":"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2","Type":"ContainerDied","Data":"2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f"}
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.658764 5029 generic.go:334] "Generic (PLEG): container finished" podID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerID="b66168f331ab51333cd7c9299ada6e09f676cc011ae2deb185352bd37399272b" exitCode=0
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.658889 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzpk" event={"ID":"422f4b9f-2373-4076-95f1-1b9c8c19627d","Type":"ContainerDied","Data":"b66168f331ab51333cd7c9299ada6e09f676cc011ae2deb185352bd37399272b"}
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.663758 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="b3472f16590166b79f46f34c6217c66c7d8b48ea3fca5ec24ca6412baf78724c" exitCode=0
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.663834 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"b3472f16590166b79f46f34c6217c66c7d8b48ea3fca5ec24ca6412baf78724c"}
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.663911 5029 scope.go:117] "RemoveContainer" containerID="6fe5b0d159fb295798c4d0957c291bf12dcd3c968aab2f905359185487745aa1"
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.940655 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.999334 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-utilities\") pod \"422f4b9f-2373-4076-95f1-1b9c8c19627d\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") "
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.999561 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvdsv\" (UniqueName: \"kubernetes.io/projected/422f4b9f-2373-4076-95f1-1b9c8c19627d-kube-api-access-pvdsv\") pod \"422f4b9f-2373-4076-95f1-1b9c8c19627d\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") "
Mar 13 21:05:32 crc kubenswrapper[5029]: I0313 21:05:32.999736 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-catalog-content\") pod \"422f4b9f-2373-4076-95f1-1b9c8c19627d\" (UID: \"422f4b9f-2373-4076-95f1-1b9c8c19627d\") "
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.000421 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-utilities" (OuterVolumeSpecName: "utilities") pod "422f4b9f-2373-4076-95f1-1b9c8c19627d" (UID: "422f4b9f-2373-4076-95f1-1b9c8c19627d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.007971 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422f4b9f-2373-4076-95f1-1b9c8c19627d-kube-api-access-pvdsv" (OuterVolumeSpecName: "kube-api-access-pvdsv") pod "422f4b9f-2373-4076-95f1-1b9c8c19627d" (UID: "422f4b9f-2373-4076-95f1-1b9c8c19627d"). InnerVolumeSpecName "kube-api-access-pvdsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.030045 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "422f4b9f-2373-4076-95f1-1b9c8c19627d" (UID: "422f4b9f-2373-4076-95f1-1b9c8c19627d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.103511 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.103670 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422f4b9f-2373-4076-95f1-1b9c8c19627d-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.103697 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvdsv\" (UniqueName: \"kubernetes.io/projected/422f4b9f-2373-4076-95f1-1b9c8c19627d-kube-api-access-pvdsv\") on node \"crc\" DevicePath \"\""
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.678792 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgppk" event={"ID":"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2","Type":"ContainerStarted","Data":"f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236"}
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.682679 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzpk" event={"ID":"422f4b9f-2373-4076-95f1-1b9c8c19627d","Type":"ContainerDied","Data":"ffa49ceafa0efc0dc3d031c7305e3f6e580554b773fddcc6565eba1be1ca661f"}
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.682746 5029 scope.go:117] "RemoveContainer" containerID="b66168f331ab51333cd7c9299ada6e09f676cc011ae2deb185352bd37399272b"
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.682922 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twzpk"
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.702276 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db"}
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.702709 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dgppk" podStartSLOduration=2.259149594 podStartE2EDuration="4.702677304s" podCreationTimestamp="2026-03-13 21:05:29 +0000 UTC" firstStartedPulling="2026-03-13 21:05:30.658775524 +0000 UTC m=+2290.674857927" lastFinishedPulling="2026-03-13 21:05:33.102303244 +0000 UTC m=+2293.118385637" observedRunningTime="2026-03-13 21:05:33.697077157 +0000 UTC m=+2293.713159560" watchObservedRunningTime="2026-03-13 21:05:33.702677304 +0000 UTC m=+2293.718759707"
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.718024 5029 scope.go:117] "RemoveContainer" containerID="68ace0ac7586fc4494f92bfb8994f84fd140c25d86e7c102c8b6e98ae71e9d87"
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.750318 5029 scope.go:117] "RemoveContainer" containerID="117a9ca04860556e6ca1f133570daba6809c6469542291e9473106283ecad351"
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.761713 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzpk"]
Mar 13 21:05:33 crc kubenswrapper[5029]: I0313 21:05:33.775721 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzpk"]
Mar 13 21:05:34 crc kubenswrapper[5029]: I0313 21:05:34.612076 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422f4b9f-2373-4076-95f1-1b9c8c19627d" path="/var/lib/kubelet/pods/422f4b9f-2373-4076-95f1-1b9c8c19627d/volumes"
Mar 13 21:05:39 crc kubenswrapper[5029]: I0313 21:05:39.693703 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:39 crc kubenswrapper[5029]: I0313 21:05:39.694377 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:39 crc kubenswrapper[5029]: I0313 21:05:39.751766 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:39 crc kubenswrapper[5029]: I0313 21:05:39.849717 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:39 crc kubenswrapper[5029]: I0313 21:05:39.996048 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dgppk"]
Mar 13 21:05:41 crc kubenswrapper[5029]: I0313 21:05:41.811512 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dgppk" podUID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerName="registry-server" containerID="cri-o://f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236" gracePeriod=2
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.300365 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.441057 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-utilities\") pod \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") "
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.441624 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7skc\" (UniqueName: \"kubernetes.io/projected/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-kube-api-access-p7skc\") pod \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") "
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.441898 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-catalog-content\") pod \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\" (UID: \"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2\") "
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.442888 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-utilities" (OuterVolumeSpecName: "utilities") pod "7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" (UID: "7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.475220 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-kube-api-access-p7skc" (OuterVolumeSpecName: "kube-api-access-p7skc") pod "7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" (UID: "7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2"). InnerVolumeSpecName "kube-api-access-p7skc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.507659 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" (UID: "7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.544764 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.544903 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7skc\" (UniqueName: \"kubernetes.io/projected/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-kube-api-access-p7skc\") on node \"crc\" DevicePath \"\""
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.544916 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.822380 5029 generic.go:334] "Generic (PLEG): container finished" podID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerID="f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236" exitCode=0
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.822484 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgppk"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.822463 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgppk" event={"ID":"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2","Type":"ContainerDied","Data":"f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236"}
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.822683 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgppk" event={"ID":"7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2","Type":"ContainerDied","Data":"f8d02bdf88092a5562631615796e858e8624aedfdaf8239c28570afd07d659d2"}
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.822715 5029 scope.go:117] "RemoveContainer" containerID="f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.847128 5029 scope.go:117] "RemoveContainer" containerID="2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.855231 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dgppk"]
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.869646 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dgppk"]
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.874930 5029 scope.go:117] "RemoveContainer" containerID="93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.916701 5029 scope.go:117] "RemoveContainer" containerID="f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236"
Mar 13 21:05:42 crc kubenswrapper[5029]: E0313 21:05:42.917317 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236\": container with ID starting with f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236 not found: ID does not exist" containerID="f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.917358 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236"} err="failed to get container status \"f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236\": rpc error: code = NotFound desc = could not find container \"f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236\": container with ID starting with f7a6ff917dcb01f2fa63c922e2afdc348c7f74b93153a2e546f762da6df24236 not found: ID does not exist"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.917384 5029 scope.go:117] "RemoveContainer" containerID="2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f"
Mar 13 21:05:42 crc kubenswrapper[5029]: E0313 21:05:42.917731 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f\": container with ID starting with 2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f not found: ID does not exist" containerID="2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.917766 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f"} err="failed to get container status \"2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f\": rpc error: code = NotFound desc = could not find container \"2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f\": container with ID starting with 2369c2dd64f9e4448cee48124717c15453be2fe112154dc2c8951ccfb783543f not found: ID does not exist"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.917787 5029 scope.go:117] "RemoveContainer" containerID="93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f"
Mar 13 21:05:42 crc kubenswrapper[5029]: E0313 21:05:42.918228 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f\": container with ID starting with 93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f not found: ID does not exist" containerID="93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f"
Mar 13 21:05:42 crc kubenswrapper[5029]: I0313 21:05:42.918256 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f"} err="failed to get container status \"93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f\": rpc error: code = NotFound desc = could not find container \"93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f\": container with ID starting with 93a649142ba70e1d5122337446562657d59e4aea321f32761d3516f78290336f not found: ID does not exist"
Mar 13 21:05:44 crc kubenswrapper[5029]: I0313 21:05:44.614008 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" path="/var/lib/kubelet/pods/7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2/volumes"
Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.144260 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557266-7qt9r"]
Mar 13 21:06:00
crc kubenswrapper[5029]: E0313 21:06:00.146539 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerName="extract-content" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.146564 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerName="extract-content" Mar 13 21:06:00 crc kubenswrapper[5029]: E0313 21:06:00.146583 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerName="registry-server" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.146590 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerName="registry-server" Mar 13 21:06:00 crc kubenswrapper[5029]: E0313 21:06:00.146609 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerName="registry-server" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.146615 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerName="registry-server" Mar 13 21:06:00 crc kubenswrapper[5029]: E0313 21:06:00.146629 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerName="extract-content" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.146635 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerName="extract-content" Mar 13 21:06:00 crc kubenswrapper[5029]: E0313 21:06:00.146645 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerName="extract-utilities" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.146651 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerName="extract-utilities" Mar 13 21:06:00 crc 
kubenswrapper[5029]: E0313 21:06:00.146659 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerName="extract-utilities" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.146664 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerName="extract-utilities" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.146892 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c96da24-ff2b-4a6f-86ca-340a2d7ee0f2" containerName="registry-server" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.146926 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="422f4b9f-2373-4076-95f1-1b9c8c19627d" containerName="registry-server" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.147729 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557266-7qt9r" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.152194 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.152650 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.152964 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.156490 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557266-7qt9r"] Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.238567 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-294f2\" (UniqueName: \"kubernetes.io/projected/4b3b9048-9bad-4203-bf18-7514da4c4d36-kube-api-access-294f2\") pod 
\"auto-csr-approver-29557266-7qt9r\" (UID: \"4b3b9048-9bad-4203-bf18-7514da4c4d36\") " pod="openshift-infra/auto-csr-approver-29557266-7qt9r" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.340874 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-294f2\" (UniqueName: \"kubernetes.io/projected/4b3b9048-9bad-4203-bf18-7514da4c4d36-kube-api-access-294f2\") pod \"auto-csr-approver-29557266-7qt9r\" (UID: \"4b3b9048-9bad-4203-bf18-7514da4c4d36\") " pod="openshift-infra/auto-csr-approver-29557266-7qt9r" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.360576 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-294f2\" (UniqueName: \"kubernetes.io/projected/4b3b9048-9bad-4203-bf18-7514da4c4d36-kube-api-access-294f2\") pod \"auto-csr-approver-29557266-7qt9r\" (UID: \"4b3b9048-9bad-4203-bf18-7514da4c4d36\") " pod="openshift-infra/auto-csr-approver-29557266-7qt9r" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.511223 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557266-7qt9r" Mar 13 21:06:00 crc kubenswrapper[5029]: I0313 21:06:00.956786 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557266-7qt9r"] Mar 13 21:06:01 crc kubenswrapper[5029]: I0313 21:06:01.015006 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557266-7qt9r" event={"ID":"4b3b9048-9bad-4203-bf18-7514da4c4d36","Type":"ContainerStarted","Data":"1952868e59dab2a83d12c79bd4d4f5c466ee465584f7c23cabb1267e1c77b535"} Mar 13 21:06:03 crc kubenswrapper[5029]: I0313 21:06:03.039807 5029 generic.go:334] "Generic (PLEG): container finished" podID="4b3b9048-9bad-4203-bf18-7514da4c4d36" containerID="e29d9c30aff95abe82ae1de64cd6d5f10803bcd8911b50c3dc83f20fddf5d1c0" exitCode=0 Mar 13 21:06:03 crc kubenswrapper[5029]: I0313 21:06:03.039916 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557266-7qt9r" event={"ID":"4b3b9048-9bad-4203-bf18-7514da4c4d36","Type":"ContainerDied","Data":"e29d9c30aff95abe82ae1de64cd6d5f10803bcd8911b50c3dc83f20fddf5d1c0"} Mar 13 21:06:04 crc kubenswrapper[5029]: I0313 21:06:04.420985 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557266-7qt9r" Mar 13 21:06:04 crc kubenswrapper[5029]: I0313 21:06:04.565727 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-294f2\" (UniqueName: \"kubernetes.io/projected/4b3b9048-9bad-4203-bf18-7514da4c4d36-kube-api-access-294f2\") pod \"4b3b9048-9bad-4203-bf18-7514da4c4d36\" (UID: \"4b3b9048-9bad-4203-bf18-7514da4c4d36\") " Mar 13 21:06:04 crc kubenswrapper[5029]: I0313 21:06:04.578079 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b3b9048-9bad-4203-bf18-7514da4c4d36-kube-api-access-294f2" (OuterVolumeSpecName: "kube-api-access-294f2") pod "4b3b9048-9bad-4203-bf18-7514da4c4d36" (UID: "4b3b9048-9bad-4203-bf18-7514da4c4d36"). InnerVolumeSpecName "kube-api-access-294f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:06:04 crc kubenswrapper[5029]: I0313 21:06:04.668337 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-294f2\" (UniqueName: \"kubernetes.io/projected/4b3b9048-9bad-4203-bf18-7514da4c4d36-kube-api-access-294f2\") on node \"crc\" DevicePath \"\"" Mar 13 21:06:05 crc kubenswrapper[5029]: I0313 21:06:05.059636 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557266-7qt9r" event={"ID":"4b3b9048-9bad-4203-bf18-7514da4c4d36","Type":"ContainerDied","Data":"1952868e59dab2a83d12c79bd4d4f5c466ee465584f7c23cabb1267e1c77b535"} Mar 13 21:06:05 crc kubenswrapper[5029]: I0313 21:06:05.059681 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1952868e59dab2a83d12c79bd4d4f5c466ee465584f7c23cabb1267e1c77b535" Mar 13 21:06:05 crc kubenswrapper[5029]: I0313 21:06:05.059750 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557266-7qt9r" Mar 13 21:06:05 crc kubenswrapper[5029]: I0313 21:06:05.493982 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557260-sbf99"] Mar 13 21:06:05 crc kubenswrapper[5029]: I0313 21:06:05.503019 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557260-sbf99"] Mar 13 21:06:06 crc kubenswrapper[5029]: I0313 21:06:06.621579 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a" path="/var/lib/kubelet/pods/cdde5ceb-44a0-4e21-b8e1-2dc5ee3fcc0a/volumes" Mar 13 21:06:12 crc kubenswrapper[5029]: I0313 21:06:12.201967 5029 scope.go:117] "RemoveContainer" containerID="c921ed331d998f4e3e9e1a9dc3c8bb1339db688aa9a67c78b1f9ba13c0f90101" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.150768 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557268-5fw2s"] Mar 13 21:08:00 crc kubenswrapper[5029]: E0313 21:08:00.151988 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3b9048-9bad-4203-bf18-7514da4c4d36" containerName="oc" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.152010 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3b9048-9bad-4203-bf18-7514da4c4d36" containerName="oc" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.152326 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b3b9048-9bad-4203-bf18-7514da4c4d36" containerName="oc" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.153170 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557268-5fw2s" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.156710 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.158360 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.165679 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.166389 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557268-5fw2s"] Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.213309 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhbd\" (UniqueName: \"kubernetes.io/projected/ec600491-3bd7-45bb-8015-36719d4a56cd-kube-api-access-klhbd\") pod \"auto-csr-approver-29557268-5fw2s\" (UID: \"ec600491-3bd7-45bb-8015-36719d4a56cd\") " pod="openshift-infra/auto-csr-approver-29557268-5fw2s" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.315662 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klhbd\" (UniqueName: \"kubernetes.io/projected/ec600491-3bd7-45bb-8015-36719d4a56cd-kube-api-access-klhbd\") pod \"auto-csr-approver-29557268-5fw2s\" (UID: \"ec600491-3bd7-45bb-8015-36719d4a56cd\") " pod="openshift-infra/auto-csr-approver-29557268-5fw2s" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.340163 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klhbd\" (UniqueName: \"kubernetes.io/projected/ec600491-3bd7-45bb-8015-36719d4a56cd-kube-api-access-klhbd\") pod \"auto-csr-approver-29557268-5fw2s\" (UID: \"ec600491-3bd7-45bb-8015-36719d4a56cd\") " 
pod="openshift-infra/auto-csr-approver-29557268-5fw2s" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.479617 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557268-5fw2s" Mar 13 21:08:00 crc kubenswrapper[5029]: I0313 21:08:00.961920 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557268-5fw2s"] Mar 13 21:08:01 crc kubenswrapper[5029]: I0313 21:08:01.268730 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557268-5fw2s" event={"ID":"ec600491-3bd7-45bb-8015-36719d4a56cd","Type":"ContainerStarted","Data":"b2469dbf9c993f929b8af3681e8093ec585e4baf35816140ad7649255d54818f"} Mar 13 21:08:01 crc kubenswrapper[5029]: I0313 21:08:01.950107 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:08:01 crc kubenswrapper[5029]: I0313 21:08:01.950665 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:08:02 crc kubenswrapper[5029]: I0313 21:08:02.281642 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557268-5fw2s" event={"ID":"ec600491-3bd7-45bb-8015-36719d4a56cd","Type":"ContainerStarted","Data":"d6b647e54d957f31673d6eaa295d3f5b3a297626ef55cca04b5c01eb49144471"} Mar 13 21:08:02 crc kubenswrapper[5029]: I0313 21:08:02.303345 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29557268-5fw2s" podStartSLOduration=1.275319309 podStartE2EDuration="2.303318724s" podCreationTimestamp="2026-03-13 21:08:00 +0000 UTC" firstStartedPulling="2026-03-13 21:08:00.963114781 +0000 UTC m=+2440.979197184" lastFinishedPulling="2026-03-13 21:08:01.991114196 +0000 UTC m=+2442.007196599" observedRunningTime="2026-03-13 21:08:02.294178847 +0000 UTC m=+2442.310261250" watchObservedRunningTime="2026-03-13 21:08:02.303318724 +0000 UTC m=+2442.319401127" Mar 13 21:08:03 crc kubenswrapper[5029]: I0313 21:08:03.293245 5029 generic.go:334] "Generic (PLEG): container finished" podID="ec600491-3bd7-45bb-8015-36719d4a56cd" containerID="d6b647e54d957f31673d6eaa295d3f5b3a297626ef55cca04b5c01eb49144471" exitCode=0 Mar 13 21:08:03 crc kubenswrapper[5029]: I0313 21:08:03.293656 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557268-5fw2s" event={"ID":"ec600491-3bd7-45bb-8015-36719d4a56cd","Type":"ContainerDied","Data":"d6b647e54d957f31673d6eaa295d3f5b3a297626ef55cca04b5c01eb49144471"} Mar 13 21:08:04 crc kubenswrapper[5029]: I0313 21:08:04.698596 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557268-5fw2s" Mar 13 21:08:04 crc kubenswrapper[5029]: I0313 21:08:04.727179 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klhbd\" (UniqueName: \"kubernetes.io/projected/ec600491-3bd7-45bb-8015-36719d4a56cd-kube-api-access-klhbd\") pod \"ec600491-3bd7-45bb-8015-36719d4a56cd\" (UID: \"ec600491-3bd7-45bb-8015-36719d4a56cd\") " Mar 13 21:08:04 crc kubenswrapper[5029]: I0313 21:08:04.735056 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec600491-3bd7-45bb-8015-36719d4a56cd-kube-api-access-klhbd" (OuterVolumeSpecName: "kube-api-access-klhbd") pod "ec600491-3bd7-45bb-8015-36719d4a56cd" (UID: "ec600491-3bd7-45bb-8015-36719d4a56cd"). InnerVolumeSpecName "kube-api-access-klhbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:08:04 crc kubenswrapper[5029]: I0313 21:08:04.829212 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klhbd\" (UniqueName: \"kubernetes.io/projected/ec600491-3bd7-45bb-8015-36719d4a56cd-kube-api-access-klhbd\") on node \"crc\" DevicePath \"\"" Mar 13 21:08:05 crc kubenswrapper[5029]: I0313 21:08:05.317917 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557268-5fw2s" event={"ID":"ec600491-3bd7-45bb-8015-36719d4a56cd","Type":"ContainerDied","Data":"b2469dbf9c993f929b8af3681e8093ec585e4baf35816140ad7649255d54818f"} Mar 13 21:08:05 crc kubenswrapper[5029]: I0313 21:08:05.317987 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2469dbf9c993f929b8af3681e8093ec585e4baf35816140ad7649255d54818f" Mar 13 21:08:05 crc kubenswrapper[5029]: I0313 21:08:05.318103 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557268-5fw2s" Mar 13 21:08:05 crc kubenswrapper[5029]: I0313 21:08:05.378027 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557262-jvdhx"] Mar 13 21:08:05 crc kubenswrapper[5029]: I0313 21:08:05.389479 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557262-jvdhx"] Mar 13 21:08:06 crc kubenswrapper[5029]: I0313 21:08:06.619826 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c211696-0d5f-416b-8fff-e0294fa5542a" path="/var/lib/kubelet/pods/9c211696-0d5f-416b-8fff-e0294fa5542a/volumes" Mar 13 21:08:12 crc kubenswrapper[5029]: I0313 21:08:12.324294 5029 scope.go:117] "RemoveContainer" containerID="5522695e3c601c3fa50856011a32bb1c8d85c4a3b50585386e29cd6eaf955657" Mar 13 21:08:31 crc kubenswrapper[5029]: I0313 21:08:31.950762 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:08:31 crc kubenswrapper[5029]: I0313 21:08:31.951407 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:09:01 crc kubenswrapper[5029]: I0313 21:09:01.950368 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:09:01 crc kubenswrapper[5029]: 
I0313 21:09:01.950997 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:09:01 crc kubenswrapper[5029]: I0313 21:09:01.951048 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 21:09:01 crc kubenswrapper[5029]: I0313 21:09:01.951951 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 21:09:01 crc kubenswrapper[5029]: I0313 21:09:01.952037 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" gracePeriod=600 Mar 13 21:09:02 crc kubenswrapper[5029]: E0313 21:09:02.075384 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:09:02 crc kubenswrapper[5029]: I0313 21:09:02.905410 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="103d724b-82ad-4507-960c-6739fa89ab17" containerID="f2d521d74090b480181f4f413818fedcf33adc320c87e37ea994b647e0289c8e" exitCode=0 Mar 13 21:09:02 crc kubenswrapper[5029]: I0313 21:09:02.906010 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" event={"ID":"103d724b-82ad-4507-960c-6739fa89ab17","Type":"ContainerDied","Data":"f2d521d74090b480181f4f413818fedcf33adc320c87e37ea994b647e0289c8e"} Mar 13 21:09:02 crc kubenswrapper[5029]: I0313 21:09:02.909674 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" exitCode=0 Mar 13 21:09:02 crc kubenswrapper[5029]: I0313 21:09:02.909844 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db"} Mar 13 21:09:02 crc kubenswrapper[5029]: I0313 21:09:02.909953 5029 scope.go:117] "RemoveContainer" containerID="b3472f16590166b79f46f34c6217c66c7d8b48ea3fca5ec24ca6412baf78724c" Mar 13 21:09:02 crc kubenswrapper[5029]: I0313 21:09:02.910701 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:09:02 crc kubenswrapper[5029]: E0313 21:09:02.911066 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.487299 5029 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.598763 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-secret-0\") pod \"103d724b-82ad-4507-960c-6739fa89ab17\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.598942 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl6s8\" (UniqueName: \"kubernetes.io/projected/103d724b-82ad-4507-960c-6739fa89ab17-kube-api-access-gl6s8\") pod \"103d724b-82ad-4507-960c-6739fa89ab17\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.599082 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-combined-ca-bundle\") pod \"103d724b-82ad-4507-960c-6739fa89ab17\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.599153 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-ssh-key-openstack-edpm-ipam\") pod \"103d724b-82ad-4507-960c-6739fa89ab17\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.599262 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-inventory\") pod \"103d724b-82ad-4507-960c-6739fa89ab17\" (UID: \"103d724b-82ad-4507-960c-6739fa89ab17\") " Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.606400 
5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103d724b-82ad-4507-960c-6739fa89ab17-kube-api-access-gl6s8" (OuterVolumeSpecName: "kube-api-access-gl6s8") pod "103d724b-82ad-4507-960c-6739fa89ab17" (UID: "103d724b-82ad-4507-960c-6739fa89ab17"). InnerVolumeSpecName "kube-api-access-gl6s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.609177 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "103d724b-82ad-4507-960c-6739fa89ab17" (UID: "103d724b-82ad-4507-960c-6739fa89ab17"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.631030 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-inventory" (OuterVolumeSpecName: "inventory") pod "103d724b-82ad-4507-960c-6739fa89ab17" (UID: "103d724b-82ad-4507-960c-6739fa89ab17"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.638423 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "103d724b-82ad-4507-960c-6739fa89ab17" (UID: "103d724b-82ad-4507-960c-6739fa89ab17"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.641157 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "103d724b-82ad-4507-960c-6739fa89ab17" (UID: "103d724b-82ad-4507-960c-6739fa89ab17"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.704125 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl6s8\" (UniqueName: \"kubernetes.io/projected/103d724b-82ad-4507-960c-6739fa89ab17-kube-api-access-gl6s8\") on node \"crc\" DevicePath \"\"" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.704163 5029 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.704174 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.704207 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-inventory\") on node \"crc\" DevicePath \"\"" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.704221 5029 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/103d724b-82ad-4507-960c-6739fa89ab17-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.931538 5029 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" event={"ID":"103d724b-82ad-4507-960c-6739fa89ab17","Type":"ContainerDied","Data":"d99696f6e163798d1d8acd68f44150d95dc0bb4b00ac2655858faa24a3768e44"} Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.931589 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99696f6e163798d1d8acd68f44150d95dc0bb4b00ac2655858faa24a3768e44" Mar 13 21:09:04 crc kubenswrapper[5029]: I0313 21:09:04.931598 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.043899 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4"] Mar 13 21:09:05 crc kubenswrapper[5029]: E0313 21:09:05.044614 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103d724b-82ad-4507-960c-6739fa89ab17" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.044640 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="103d724b-82ad-4507-960c-6739fa89ab17" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 13 21:09:05 crc kubenswrapper[5029]: E0313 21:09:05.044671 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec600491-3bd7-45bb-8015-36719d4a56cd" containerName="oc" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.044679 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec600491-3bd7-45bb-8015-36719d4a56cd" containerName="oc" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.046018 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec600491-3bd7-45bb-8015-36719d4a56cd" containerName="oc" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.046049 5029 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="103d724b-82ad-4507-960c-6739fa89ab17" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.046951 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.050478 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.050808 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.050947 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.052263 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.052419 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.052583 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.056110 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.065234 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4"] Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223240 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223400 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223474 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223554 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223627 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" 
(UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223673 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223701 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223797 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223835 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.223920 5029 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfvmp\" (UniqueName: \"kubernetes.io/projected/b58e81ba-bde3-4a48-b2b6-9e52514608eb-kube-api-access-lfvmp\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.224369 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326389 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326448 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfvmp\" (UniqueName: \"kubernetes.io/projected/b58e81ba-bde3-4a48-b2b6-9e52514608eb-kube-api-access-lfvmp\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326513 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326568 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326585 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326633 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326673 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 
21:09:05.326693 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326717 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326754 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.326803 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.328655 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-extra-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.331655 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.332893 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.332897 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.333486 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.333937 5029 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.334377 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.335017 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.335022 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.335105 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: 
\"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.351273 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfvmp\" (UniqueName: \"kubernetes.io/projected/b58e81ba-bde3-4a48-b2b6-9e52514608eb-kube-api-access-lfvmp\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fkfg4\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.369018 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.918927 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4"] Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.921192 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 21:09:05 crc kubenswrapper[5029]: I0313 21:09:05.944126 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" event={"ID":"b58e81ba-bde3-4a48-b2b6-9e52514608eb","Type":"ContainerStarted","Data":"e90b34c3e0899a35f8716d1fbc72c58b754399b602d25db8c9b44ae1c16056c3"} Mar 13 21:09:07 crc kubenswrapper[5029]: I0313 21:09:07.993403 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" event={"ID":"b58e81ba-bde3-4a48-b2b6-9e52514608eb","Type":"ContainerStarted","Data":"046bcbd945fb64403af179ab7227a08d9a35d83f5a776707c3b43ce028b08970"} Mar 13 21:09:08 crc kubenswrapper[5029]: I0313 21:09:08.038518 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" podStartSLOduration=1.3952581450000001 
podStartE2EDuration="3.038496929s" podCreationTimestamp="2026-03-13 21:09:05 +0000 UTC" firstStartedPulling="2026-03-13 21:09:05.920961945 +0000 UTC m=+2505.937044348" lastFinishedPulling="2026-03-13 21:09:07.564200729 +0000 UTC m=+2507.580283132" observedRunningTime="2026-03-13 21:09:08.033670474 +0000 UTC m=+2508.049752887" watchObservedRunningTime="2026-03-13 21:09:08.038496929 +0000 UTC m=+2508.054579332" Mar 13 21:09:15 crc kubenswrapper[5029]: I0313 21:09:15.600301 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:09:15 crc kubenswrapper[5029]: E0313 21:09:15.601617 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:09:26 crc kubenswrapper[5029]: I0313 21:09:26.600958 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:09:26 crc kubenswrapper[5029]: E0313 21:09:26.601761 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:09:40 crc kubenswrapper[5029]: I0313 21:09:40.608814 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:09:40 crc kubenswrapper[5029]: E0313 21:09:40.610252 
5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:09:55 crc kubenswrapper[5029]: I0313 21:09:55.599752 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:09:55 crc kubenswrapper[5029]: E0313 21:09:55.600772 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.162924 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557270-2c9d9"] Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.165082 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557270-2c9d9" Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.169637 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.172235 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.172662 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557270-2c9d9"] Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.176201 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.211673 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skkf2\" (UniqueName: \"kubernetes.io/projected/048deb4f-2208-41fd-93b3-210f84bc8203-kube-api-access-skkf2\") pod \"auto-csr-approver-29557270-2c9d9\" (UID: \"048deb4f-2208-41fd-93b3-210f84bc8203\") " pod="openshift-infra/auto-csr-approver-29557270-2c9d9" Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.313713 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skkf2\" (UniqueName: \"kubernetes.io/projected/048deb4f-2208-41fd-93b3-210f84bc8203-kube-api-access-skkf2\") pod \"auto-csr-approver-29557270-2c9d9\" (UID: \"048deb4f-2208-41fd-93b3-210f84bc8203\") " pod="openshift-infra/auto-csr-approver-29557270-2c9d9" Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.334127 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skkf2\" (UniqueName: \"kubernetes.io/projected/048deb4f-2208-41fd-93b3-210f84bc8203-kube-api-access-skkf2\") pod \"auto-csr-approver-29557270-2c9d9\" (UID: \"048deb4f-2208-41fd-93b3-210f84bc8203\") " 
pod="openshift-infra/auto-csr-approver-29557270-2c9d9" Mar 13 21:10:00 crc kubenswrapper[5029]: I0313 21:10:00.487517 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557270-2c9d9" Mar 13 21:10:01 crc kubenswrapper[5029]: I0313 21:10:01.020652 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557270-2c9d9"] Mar 13 21:10:01 crc kubenswrapper[5029]: I0313 21:10:01.557937 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557270-2c9d9" event={"ID":"048deb4f-2208-41fd-93b3-210f84bc8203","Type":"ContainerStarted","Data":"8a9455b849d8351208960f773de7c4af2ebf4fd5cdf18dcb640af494ef1106bd"} Mar 13 21:10:03 crc kubenswrapper[5029]: I0313 21:10:03.579544 5029 generic.go:334] "Generic (PLEG): container finished" podID="048deb4f-2208-41fd-93b3-210f84bc8203" containerID="473fc57923e7a6b3c85837a607f6246ef10e4485b9ed990c8abc5ac06b267b8f" exitCode=0 Mar 13 21:10:03 crc kubenswrapper[5029]: I0313 21:10:03.579615 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557270-2c9d9" event={"ID":"048deb4f-2208-41fd-93b3-210f84bc8203","Type":"ContainerDied","Data":"473fc57923e7a6b3c85837a607f6246ef10e4485b9ed990c8abc5ac06b267b8f"} Mar 13 21:10:04 crc kubenswrapper[5029]: I0313 21:10:04.938675 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557270-2c9d9" Mar 13 21:10:05 crc kubenswrapper[5029]: I0313 21:10:05.029172 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skkf2\" (UniqueName: \"kubernetes.io/projected/048deb4f-2208-41fd-93b3-210f84bc8203-kube-api-access-skkf2\") pod \"048deb4f-2208-41fd-93b3-210f84bc8203\" (UID: \"048deb4f-2208-41fd-93b3-210f84bc8203\") " Mar 13 21:10:05 crc kubenswrapper[5029]: I0313 21:10:05.037393 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/048deb4f-2208-41fd-93b3-210f84bc8203-kube-api-access-skkf2" (OuterVolumeSpecName: "kube-api-access-skkf2") pod "048deb4f-2208-41fd-93b3-210f84bc8203" (UID: "048deb4f-2208-41fd-93b3-210f84bc8203"). InnerVolumeSpecName "kube-api-access-skkf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:10:05 crc kubenswrapper[5029]: I0313 21:10:05.133008 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skkf2\" (UniqueName: \"kubernetes.io/projected/048deb4f-2208-41fd-93b3-210f84bc8203-kube-api-access-skkf2\") on node \"crc\" DevicePath \"\"" Mar 13 21:10:05 crc kubenswrapper[5029]: I0313 21:10:05.601866 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557270-2c9d9" event={"ID":"048deb4f-2208-41fd-93b3-210f84bc8203","Type":"ContainerDied","Data":"8a9455b849d8351208960f773de7c4af2ebf4fd5cdf18dcb640af494ef1106bd"} Mar 13 21:10:05 crc kubenswrapper[5029]: I0313 21:10:05.601905 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9455b849d8351208960f773de7c4af2ebf4fd5cdf18dcb640af494ef1106bd" Mar 13 21:10:05 crc kubenswrapper[5029]: I0313 21:10:05.601956 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557270-2c9d9" Mar 13 21:10:06 crc kubenswrapper[5029]: I0313 21:10:06.039191 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557264-wrd8g"] Mar 13 21:10:06 crc kubenswrapper[5029]: I0313 21:10:06.048723 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557264-wrd8g"] Mar 13 21:10:06 crc kubenswrapper[5029]: I0313 21:10:06.611365 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2274a9fc-5569-4924-8713-c048d72509bf" path="/var/lib/kubelet/pods/2274a9fc-5569-4924-8713-c048d72509bf/volumes" Mar 13 21:10:08 crc kubenswrapper[5029]: I0313 21:10:08.600044 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:10:08 crc kubenswrapper[5029]: E0313 21:10:08.600901 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:10:12 crc kubenswrapper[5029]: I0313 21:10:12.431956 5029 scope.go:117] "RemoveContainer" containerID="4a4c5fc6e4dcc19dde1dc505cfc6ce8b19e8eaf0ce37294a9b0248bd3bc0ab8e" Mar 13 21:10:19 crc kubenswrapper[5029]: I0313 21:10:19.600288 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:10:19 crc kubenswrapper[5029]: E0313 21:10:19.601117 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:10:30 crc kubenswrapper[5029]: I0313 21:10:30.609163 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:10:30 crc kubenswrapper[5029]: E0313 21:10:30.610098 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:10:44 crc kubenswrapper[5029]: I0313 21:10:44.600841 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:10:44 crc kubenswrapper[5029]: E0313 21:10:44.602067 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:10:59 crc kubenswrapper[5029]: I0313 21:10:59.600216 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:10:59 crc kubenswrapper[5029]: E0313 21:10:59.600992 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:11:14 crc kubenswrapper[5029]: I0313 21:11:14.600734 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:11:14 crc kubenswrapper[5029]: E0313 21:11:14.601771 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:11:25 crc kubenswrapper[5029]: I0313 21:11:25.600735 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:11:25 crc kubenswrapper[5029]: E0313 21:11:25.601507 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:11:35 crc kubenswrapper[5029]: I0313 21:11:35.464562 5029 generic.go:334] "Generic (PLEG): container finished" podID="b58e81ba-bde3-4a48-b2b6-9e52514608eb" containerID="046bcbd945fb64403af179ab7227a08d9a35d83f5a776707c3b43ce028b08970" exitCode=0 Mar 13 21:11:35 crc kubenswrapper[5029]: I0313 21:11:35.465004 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" event={"ID":"b58e81ba-bde3-4a48-b2b6-9e52514608eb","Type":"ContainerDied","Data":"046bcbd945fb64403af179ab7227a08d9a35d83f5a776707c3b43ce028b08970"} Mar 13 21:11:36 crc kubenswrapper[5029]: I0313 21:11:36.932987 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.124338 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-inventory\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.124499 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-0\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.124644 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-1\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.124682 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-3\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.124740 5029 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-2\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.124810 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-extra-config-0\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.124835 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfvmp\" (UniqueName: \"kubernetes.io/projected/b58e81ba-bde3-4a48-b2b6-9e52514608eb-kube-api-access-lfvmp\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.124883 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-1\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.125510 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-0\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.125547 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-ssh-key-openstack-edpm-ipam\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.125603 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-combined-ca-bundle\") pod \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\" (UID: \"b58e81ba-bde3-4a48-b2b6-9e52514608eb\") " Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.147108 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58e81ba-bde3-4a48-b2b6-9e52514608eb-kube-api-access-lfvmp" (OuterVolumeSpecName: "kube-api-access-lfvmp") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "kube-api-access-lfvmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.161090 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.201261 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-inventory" (OuterVolumeSpecName: "inventory") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.218524 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.220619 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.224493 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.229770 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.230433 5029 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.230469 5029 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.230486 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfvmp\" (UniqueName: \"kubernetes.io/projected/b58e81ba-bde3-4a48-b2b6-9e52514608eb-kube-api-access-lfvmp\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.230501 5029 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.230517 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.230531 5029 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.230549 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-inventory\") on 
node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.238281 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.253615 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.256341 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.263002 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b58e81ba-bde3-4a48-b2b6-9e52514608eb" (UID: "b58e81ba-bde3-4a48-b2b6-9e52514608eb"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.333033 5029 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.333087 5029 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.333103 5029 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.333112 5029 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b58e81ba-bde3-4a48-b2b6-9e52514608eb-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.486083 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" event={"ID":"b58e81ba-bde3-4a48-b2b6-9e52514608eb","Type":"ContainerDied","Data":"e90b34c3e0899a35f8716d1fbc72c58b754399b602d25db8c9b44ae1c16056c3"} Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.486145 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fkfg4" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.486155 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90b34c3e0899a35f8716d1fbc72c58b754399b602d25db8c9b44ae1c16056c3" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.609289 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp"] Mar 13 21:11:37 crc kubenswrapper[5029]: E0313 21:11:37.610228 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58e81ba-bde3-4a48-b2b6-9e52514608eb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.610254 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58e81ba-bde3-4a48-b2b6-9e52514608eb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 13 21:11:37 crc kubenswrapper[5029]: E0313 21:11:37.610330 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048deb4f-2208-41fd-93b3-210f84bc8203" containerName="oc" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.610339 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="048deb4f-2208-41fd-93b3-210f84bc8203" containerName="oc" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.610550 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58e81ba-bde3-4a48-b2b6-9e52514608eb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.610573 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="048deb4f-2208-41fd-93b3-210f84bc8203" containerName="oc" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.611392 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.614091 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.615933 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.616066 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.616121 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.616917 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ws76m" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.632323 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp"] Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.640771 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.640834 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf97g\" (UniqueName: \"kubernetes.io/projected/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-kube-api-access-rf97g\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.640917 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.640943 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.640994 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.641221 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.641418 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.752570 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.752682 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf97g\" (UniqueName: \"kubernetes.io/projected/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-kube-api-access-rf97g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.752735 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.752782 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.752843 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.753512 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.753601 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.758557 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.762134 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.763476 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.763778 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.765422 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: 
\"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.767068 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.787594 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf97g\" (UniqueName: \"kubernetes.io/projected/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-kube-api-access-rf97g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:37 crc kubenswrapper[5029]: I0313 21:11:37.944006 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" Mar 13 21:11:38 crc kubenswrapper[5029]: I0313 21:11:38.567964 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp"] Mar 13 21:11:39 crc kubenswrapper[5029]: I0313 21:11:39.506358 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" event={"ID":"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7","Type":"ContainerStarted","Data":"3c0f510b8b10d423a1be02a0f36d920b6adcb31aaaa20b95a8762a09cd436f47"} Mar 13 21:11:39 crc kubenswrapper[5029]: I0313 21:11:39.506701 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" event={"ID":"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7","Type":"ContainerStarted","Data":"aaae1a1e3fb4f39841543c996718154cb78e2fe2910f386d76a1520b9a8336a5"} Mar 13 21:11:39 crc kubenswrapper[5029]: I0313 21:11:39.538694 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" podStartSLOduration=2.014728292 podStartE2EDuration="2.538669807s" podCreationTimestamp="2026-03-13 21:11:37 +0000 UTC" firstStartedPulling="2026-03-13 21:11:38.579503403 +0000 UTC m=+2658.595585806" lastFinishedPulling="2026-03-13 21:11:39.103444918 +0000 UTC m=+2659.119527321" observedRunningTime="2026-03-13 21:11:39.529232314 +0000 UTC m=+2659.545314717" watchObservedRunningTime="2026-03-13 21:11:39.538669807 +0000 UTC m=+2659.554752210" Mar 13 21:11:40 crc kubenswrapper[5029]: I0313 21:11:40.606879 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:11:40 crc kubenswrapper[5029]: E0313 21:11:40.607734 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.491736 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wcvrc"] Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.494253 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.511772 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wcvrc"] Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.586930 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqptc\" (UniqueName: \"kubernetes.io/projected/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-kube-api-access-dqptc\") pod \"certified-operators-wcvrc\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.587014 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-catalog-content\") pod \"certified-operators-wcvrc\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.587072 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-utilities\") pod \"certified-operators-wcvrc\" (UID: 
\"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.689410 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqptc\" (UniqueName: \"kubernetes.io/projected/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-kube-api-access-dqptc\") pod \"certified-operators-wcvrc\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.689490 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-catalog-content\") pod \"certified-operators-wcvrc\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.689546 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-utilities\") pod \"certified-operators-wcvrc\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.690061 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-utilities\") pod \"certified-operators-wcvrc\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.690135 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-catalog-content\") pod \"certified-operators-wcvrc\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") 
" pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.711399 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqptc\" (UniqueName: \"kubernetes.io/projected/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-kube-api-access-dqptc\") pod \"certified-operators-wcvrc\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:43 crc kubenswrapper[5029]: I0313 21:11:43.827449 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:44 crc kubenswrapper[5029]: I0313 21:11:44.129202 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wcvrc"] Mar 13 21:11:44 crc kubenswrapper[5029]: I0313 21:11:44.554412 5029 generic.go:334] "Generic (PLEG): container finished" podID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerID="a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811" exitCode=0 Mar 13 21:11:44 crc kubenswrapper[5029]: I0313 21:11:44.554499 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcvrc" event={"ID":"d1a3ef3e-8343-43fc-b25a-882a5e516bc9","Type":"ContainerDied","Data":"a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811"} Mar 13 21:11:44 crc kubenswrapper[5029]: I0313 21:11:44.554931 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcvrc" event={"ID":"d1a3ef3e-8343-43fc-b25a-882a5e516bc9","Type":"ContainerStarted","Data":"269029cfffe6adfefde206a9ed363086293a455f73ff478875444da73f9dd226"} Mar 13 21:11:45 crc kubenswrapper[5029]: I0313 21:11:45.569393 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcvrc" 
event={"ID":"d1a3ef3e-8343-43fc-b25a-882a5e516bc9","Type":"ContainerStarted","Data":"8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47"} Mar 13 21:11:46 crc kubenswrapper[5029]: I0313 21:11:46.582929 5029 generic.go:334] "Generic (PLEG): container finished" podID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerID="8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47" exitCode=0 Mar 13 21:11:46 crc kubenswrapper[5029]: I0313 21:11:46.583056 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcvrc" event={"ID":"d1a3ef3e-8343-43fc-b25a-882a5e516bc9","Type":"ContainerDied","Data":"8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47"} Mar 13 21:11:47 crc kubenswrapper[5029]: I0313 21:11:47.595278 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcvrc" event={"ID":"d1a3ef3e-8343-43fc-b25a-882a5e516bc9","Type":"ContainerStarted","Data":"9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab"} Mar 13 21:11:47 crc kubenswrapper[5029]: I0313 21:11:47.619085 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wcvrc" podStartSLOduration=2.131057324 podStartE2EDuration="4.619056291s" podCreationTimestamp="2026-03-13 21:11:43 +0000 UTC" firstStartedPulling="2026-03-13 21:11:44.55587684 +0000 UTC m=+2664.571959243" lastFinishedPulling="2026-03-13 21:11:47.043875807 +0000 UTC m=+2667.059958210" observedRunningTime="2026-03-13 21:11:47.613037072 +0000 UTC m=+2667.629119485" watchObservedRunningTime="2026-03-13 21:11:47.619056291 +0000 UTC m=+2667.635138694" Mar 13 21:11:51 crc kubenswrapper[5029]: I0313 21:11:51.599720 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:11:51 crc kubenswrapper[5029]: E0313 21:11:51.600612 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:11:53 crc kubenswrapper[5029]: I0313 21:11:53.827792 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:53 crc kubenswrapper[5029]: I0313 21:11:53.828164 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:53 crc kubenswrapper[5029]: I0313 21:11:53.874466 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:54 crc kubenswrapper[5029]: I0313 21:11:54.742832 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:54 crc kubenswrapper[5029]: I0313 21:11:54.793780 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wcvrc"] Mar 13 21:11:56 crc kubenswrapper[5029]: I0313 21:11:56.718775 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wcvrc" podUID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerName="registry-server" containerID="cri-o://9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab" gracePeriod=2 Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.329675 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.337710 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqptc\" (UniqueName: \"kubernetes.io/projected/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-kube-api-access-dqptc\") pod \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.337915 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-catalog-content\") pod \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.338043 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-utilities\") pod \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\" (UID: \"d1a3ef3e-8343-43fc-b25a-882a5e516bc9\") " Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.338738 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-utilities" (OuterVolumeSpecName: "utilities") pod "d1a3ef3e-8343-43fc-b25a-882a5e516bc9" (UID: "d1a3ef3e-8343-43fc-b25a-882a5e516bc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.343097 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-kube-api-access-dqptc" (OuterVolumeSpecName: "kube-api-access-dqptc") pod "d1a3ef3e-8343-43fc-b25a-882a5e516bc9" (UID: "d1a3ef3e-8343-43fc-b25a-882a5e516bc9"). InnerVolumeSpecName "kube-api-access-dqptc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.405146 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1a3ef3e-8343-43fc-b25a-882a5e516bc9" (UID: "d1a3ef3e-8343-43fc-b25a-882a5e516bc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.441059 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqptc\" (UniqueName: \"kubernetes.io/projected/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-kube-api-access-dqptc\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.441103 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.441119 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a3ef3e-8343-43fc-b25a-882a5e516bc9-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.938313 5029 generic.go:334] "Generic (PLEG): container finished" podID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerID="9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab" exitCode=0 Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.938399 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wcvrc" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.938447 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcvrc" event={"ID":"d1a3ef3e-8343-43fc-b25a-882a5e516bc9","Type":"ContainerDied","Data":"9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab"} Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.938925 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcvrc" event={"ID":"d1a3ef3e-8343-43fc-b25a-882a5e516bc9","Type":"ContainerDied","Data":"269029cfffe6adfefde206a9ed363086293a455f73ff478875444da73f9dd226"} Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.938959 5029 scope.go:117] "RemoveContainer" containerID="9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.961636 5029 scope.go:117] "RemoveContainer" containerID="8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47" Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.985698 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wcvrc"] Mar 13 21:11:58 crc kubenswrapper[5029]: I0313 21:11:58.999129 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wcvrc"] Mar 13 21:11:59 crc kubenswrapper[5029]: I0313 21:11:59.011974 5029 scope.go:117] "RemoveContainer" containerID="a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811" Mar 13 21:11:59 crc kubenswrapper[5029]: I0313 21:11:59.043312 5029 scope.go:117] "RemoveContainer" containerID="9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab" Mar 13 21:11:59 crc kubenswrapper[5029]: E0313 21:11:59.043892 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab\": container with ID starting with 9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab not found: ID does not exist" containerID="9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab" Mar 13 21:11:59 crc kubenswrapper[5029]: I0313 21:11:59.044044 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab"} err="failed to get container status \"9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab\": rpc error: code = NotFound desc = could not find container \"9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab\": container with ID starting with 9dc6ea80357416a260dfdddc049e1dbff1fe130c62991d4ac2bee4f86276ccab not found: ID does not exist" Mar 13 21:11:59 crc kubenswrapper[5029]: I0313 21:11:59.044214 5029 scope.go:117] "RemoveContainer" containerID="8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47" Mar 13 21:11:59 crc kubenswrapper[5029]: E0313 21:11:59.044691 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47\": container with ID starting with 8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47 not found: ID does not exist" containerID="8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47" Mar 13 21:11:59 crc kubenswrapper[5029]: I0313 21:11:59.044773 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47"} err="failed to get container status \"8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47\": rpc error: code = NotFound desc = could not find container \"8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47\": container with ID 
starting with 8cc0a26624e5a32fb07f7e050b6d63a60c290287a8f4f6a13682fdb25c197a47 not found: ID does not exist" Mar 13 21:11:59 crc kubenswrapper[5029]: I0313 21:11:59.044826 5029 scope.go:117] "RemoveContainer" containerID="a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811" Mar 13 21:11:59 crc kubenswrapper[5029]: E0313 21:11:59.045260 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811\": container with ID starting with a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811 not found: ID does not exist" containerID="a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811" Mar 13 21:11:59 crc kubenswrapper[5029]: I0313 21:11:59.045291 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811"} err="failed to get container status \"a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811\": rpc error: code = NotFound desc = could not find container \"a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811\": container with ID starting with a1ed50c3f08cf73c74f534a91b476659ea6e4a87515d5da922aa2a31f28a2811 not found: ID does not exist" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.150391 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557272-2j7hk"] Mar 13 21:12:00 crc kubenswrapper[5029]: E0313 21:12:00.151135 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerName="extract-utilities" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.151151 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerName="extract-utilities" Mar 13 21:12:00 crc kubenswrapper[5029]: E0313 21:12:00.151175 5029 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerName="extract-content" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.151181 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerName="extract-content" Mar 13 21:12:00 crc kubenswrapper[5029]: E0313 21:12:00.151203 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerName="registry-server" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.151209 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerName="registry-server" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.151409 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" containerName="registry-server" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.152320 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557272-2j7hk" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.156702 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.156880 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.157777 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.162026 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557272-2j7hk"] Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.189500 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnv6n\" (UniqueName: \"kubernetes.io/projected/713e3ec7-dbc6-44b7-81b3-36043df72d54-kube-api-access-cnv6n\") pod \"auto-csr-approver-29557272-2j7hk\" (UID: \"713e3ec7-dbc6-44b7-81b3-36043df72d54\") " pod="openshift-infra/auto-csr-approver-29557272-2j7hk" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.293212 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnv6n\" (UniqueName: \"kubernetes.io/projected/713e3ec7-dbc6-44b7-81b3-36043df72d54-kube-api-access-cnv6n\") pod \"auto-csr-approver-29557272-2j7hk\" (UID: \"713e3ec7-dbc6-44b7-81b3-36043df72d54\") " pod="openshift-infra/auto-csr-approver-29557272-2j7hk" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.317932 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnv6n\" (UniqueName: \"kubernetes.io/projected/713e3ec7-dbc6-44b7-81b3-36043df72d54-kube-api-access-cnv6n\") pod \"auto-csr-approver-29557272-2j7hk\" (UID: \"713e3ec7-dbc6-44b7-81b3-36043df72d54\") " 
pod="openshift-infra/auto-csr-approver-29557272-2j7hk" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.475282 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557272-2j7hk" Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.633915 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a3ef3e-8343-43fc-b25a-882a5e516bc9" path="/var/lib/kubelet/pods/d1a3ef3e-8343-43fc-b25a-882a5e516bc9/volumes" Mar 13 21:12:00 crc kubenswrapper[5029]: W0313 21:12:00.948692 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod713e3ec7_dbc6_44b7_81b3_36043df72d54.slice/crio-b13fda8bd3a485bd432f269a0ea8cd474a7363c671775cf47fe4801724810a61 WatchSource:0}: Error finding container b13fda8bd3a485bd432f269a0ea8cd474a7363c671775cf47fe4801724810a61: Status 404 returned error can't find the container with id b13fda8bd3a485bd432f269a0ea8cd474a7363c671775cf47fe4801724810a61 Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.954737 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557272-2j7hk"] Mar 13 21:12:00 crc kubenswrapper[5029]: I0313 21:12:00.963331 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557272-2j7hk" event={"ID":"713e3ec7-dbc6-44b7-81b3-36043df72d54","Type":"ContainerStarted","Data":"b13fda8bd3a485bd432f269a0ea8cd474a7363c671775cf47fe4801724810a61"} Mar 13 21:12:02 crc kubenswrapper[5029]: I0313 21:12:02.986015 5029 generic.go:334] "Generic (PLEG): container finished" podID="713e3ec7-dbc6-44b7-81b3-36043df72d54" containerID="f92fbd6927377431aa623fedb2fbccb171b557a645a63d6c863d587e1342d679" exitCode=0 Mar 13 21:12:02 crc kubenswrapper[5029]: I0313 21:12:02.986239 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557272-2j7hk" 
event={"ID":"713e3ec7-dbc6-44b7-81b3-36043df72d54","Type":"ContainerDied","Data":"f92fbd6927377431aa623fedb2fbccb171b557a645a63d6c863d587e1342d679"} Mar 13 21:12:04 crc kubenswrapper[5029]: I0313 21:12:04.346760 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557272-2j7hk" Mar 13 21:12:04 crc kubenswrapper[5029]: I0313 21:12:04.397621 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnv6n\" (UniqueName: \"kubernetes.io/projected/713e3ec7-dbc6-44b7-81b3-36043df72d54-kube-api-access-cnv6n\") pod \"713e3ec7-dbc6-44b7-81b3-36043df72d54\" (UID: \"713e3ec7-dbc6-44b7-81b3-36043df72d54\") " Mar 13 21:12:04 crc kubenswrapper[5029]: I0313 21:12:04.405190 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713e3ec7-dbc6-44b7-81b3-36043df72d54-kube-api-access-cnv6n" (OuterVolumeSpecName: "kube-api-access-cnv6n") pod "713e3ec7-dbc6-44b7-81b3-36043df72d54" (UID: "713e3ec7-dbc6-44b7-81b3-36043df72d54"). InnerVolumeSpecName "kube-api-access-cnv6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:12:04 crc kubenswrapper[5029]: I0313 21:12:04.504427 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnv6n\" (UniqueName: \"kubernetes.io/projected/713e3ec7-dbc6-44b7-81b3-36043df72d54-kube-api-access-cnv6n\") on node \"crc\" DevicePath \"\"" Mar 13 21:12:05 crc kubenswrapper[5029]: I0313 21:12:05.016215 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557272-2j7hk" event={"ID":"713e3ec7-dbc6-44b7-81b3-36043df72d54","Type":"ContainerDied","Data":"b13fda8bd3a485bd432f269a0ea8cd474a7363c671775cf47fe4801724810a61"} Mar 13 21:12:05 crc kubenswrapper[5029]: I0313 21:12:05.017318 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b13fda8bd3a485bd432f269a0ea8cd474a7363c671775cf47fe4801724810a61" Mar 13 21:12:05 crc kubenswrapper[5029]: I0313 21:12:05.016600 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557272-2j7hk" Mar 13 21:12:05 crc kubenswrapper[5029]: I0313 21:12:05.429044 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557266-7qt9r"] Mar 13 21:12:05 crc kubenswrapper[5029]: I0313 21:12:05.441005 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557266-7qt9r"] Mar 13 21:12:06 crc kubenswrapper[5029]: I0313 21:12:06.600300 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:12:06 crc kubenswrapper[5029]: E0313 21:12:06.601131 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:12:06 crc kubenswrapper[5029]: I0313 21:12:06.615697 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b3b9048-9bad-4203-bf18-7514da4c4d36" path="/var/lib/kubelet/pods/4b3b9048-9bad-4203-bf18-7514da4c4d36/volumes" Mar 13 21:12:12 crc kubenswrapper[5029]: I0313 21:12:12.547723 5029 scope.go:117] "RemoveContainer" containerID="e29d9c30aff95abe82ae1de64cd6d5f10803bcd8911b50c3dc83f20fddf5d1c0" Mar 13 21:12:19 crc kubenswrapper[5029]: I0313 21:12:19.600238 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:12:19 crc kubenswrapper[5029]: E0313 21:12:19.601311 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:12:30 crc kubenswrapper[5029]: I0313 21:12:30.606496 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:12:30 crc kubenswrapper[5029]: E0313 21:12:30.609244 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:12:42 crc kubenswrapper[5029]: I0313 21:12:42.599809 5029 scope.go:117] "RemoveContainer" 
containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:12:42 crc kubenswrapper[5029]: E0313 21:12:42.600665 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:12:54 crc kubenswrapper[5029]: I0313 21:12:54.599985 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:12:54 crc kubenswrapper[5029]: E0313 21:12:54.601366 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:13:07 crc kubenswrapper[5029]: I0313 21:13:07.599673 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:13:07 crc kubenswrapper[5029]: E0313 21:13:07.600786 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:13:19 crc kubenswrapper[5029]: I0313 21:13:19.600639 5029 scope.go:117] 
"RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:13:19 crc kubenswrapper[5029]: E0313 21:13:19.601752 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:13:34 crc kubenswrapper[5029]: I0313 21:13:34.601806 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:13:34 crc kubenswrapper[5029]: E0313 21:13:34.602772 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:13:45 crc kubenswrapper[5029]: I0313 21:13:45.600349 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db" Mar 13 21:13:45 crc kubenswrapper[5029]: E0313 21:13:45.601948 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.699302 
5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x2z4z"] Mar 13 21:13:52 crc kubenswrapper[5029]: E0313 21:13:52.701175 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713e3ec7-dbc6-44b7-81b3-36043df72d54" containerName="oc" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.701201 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="713e3ec7-dbc6-44b7-81b3-36043df72d54" containerName="oc" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.701423 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="713e3ec7-dbc6-44b7-81b3-36043df72d54" containerName="oc" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.705190 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2z4z" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.719372 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2z4z"] Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.840657 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7k86\" (UniqueName: \"kubernetes.io/projected/a8b93d35-ffb7-478e-84e4-c69ca9043957-kube-api-access-n7k86\") pod \"redhat-operators-x2z4z\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") " pod="openshift-marketplace/redhat-operators-x2z4z" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.841100 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-catalog-content\") pod \"redhat-operators-x2z4z\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") " pod="openshift-marketplace/redhat-operators-x2z4z" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.841268 5029 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-utilities\") pod \"redhat-operators-x2z4z\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") " pod="openshift-marketplace/redhat-operators-x2z4z" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.943955 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7k86\" (UniqueName: \"kubernetes.io/projected/a8b93d35-ffb7-478e-84e4-c69ca9043957-kube-api-access-n7k86\") pod \"redhat-operators-x2z4z\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") " pod="openshift-marketplace/redhat-operators-x2z4z" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.944171 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-catalog-content\") pod \"redhat-operators-x2z4z\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") " pod="openshift-marketplace/redhat-operators-x2z4z" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.944226 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-utilities\") pod \"redhat-operators-x2z4z\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") " pod="openshift-marketplace/redhat-operators-x2z4z" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.944751 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-utilities\") pod \"redhat-operators-x2z4z\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") " pod="openshift-marketplace/redhat-operators-x2z4z" Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.944883 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-catalog-content\") pod \"redhat-operators-x2z4z\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") " pod="openshift-marketplace/redhat-operators-x2z4z"
Mar 13 21:13:52 crc kubenswrapper[5029]: I0313 21:13:52.970174 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7k86\" (UniqueName: \"kubernetes.io/projected/a8b93d35-ffb7-478e-84e4-c69ca9043957-kube-api-access-n7k86\") pod \"redhat-operators-x2z4z\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") " pod="openshift-marketplace/redhat-operators-x2z4z"
Mar 13 21:13:53 crc kubenswrapper[5029]: I0313 21:13:53.041947 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2z4z"
Mar 13 21:13:53 crc kubenswrapper[5029]: I0313 21:13:53.641095 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2z4z"]
Mar 13 21:13:54 crc kubenswrapper[5029]: I0313 21:13:54.094162 5029 generic.go:334] "Generic (PLEG): container finished" podID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerID="61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4" exitCode=0
Mar 13 21:13:54 crc kubenswrapper[5029]: I0313 21:13:54.094206 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2z4z" event={"ID":"a8b93d35-ffb7-478e-84e4-c69ca9043957","Type":"ContainerDied","Data":"61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4"}
Mar 13 21:13:54 crc kubenswrapper[5029]: I0313 21:13:54.094497 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2z4z" event={"ID":"a8b93d35-ffb7-478e-84e4-c69ca9043957","Type":"ContainerStarted","Data":"65267f2a37fc2f375d5497d5fdcf5ab596eb9be4c2c670907013069f56599040"}
Mar 13 21:13:56 crc kubenswrapper[5029]: I0313 21:13:56.113755 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2z4z" event={"ID":"a8b93d35-ffb7-478e-84e4-c69ca9043957","Type":"ContainerStarted","Data":"904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731"}
Mar 13 21:13:57 crc kubenswrapper[5029]: I0313 21:13:57.599711 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db"
Mar 13 21:13:57 crc kubenswrapper[5029]: E0313 21:13:57.600370 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.147420 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557274-nxg42"]
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.149605 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557274-nxg42"
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.153178 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.153298 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q"
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.154308 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.161299 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557274-nxg42"]
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.216504 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmqf\" (UniqueName: \"kubernetes.io/projected/496521b2-1479-451a-a065-b8609d0eac95-kube-api-access-ksmqf\") pod \"auto-csr-approver-29557274-nxg42\" (UID: \"496521b2-1479-451a-a065-b8609d0eac95\") " pod="openshift-infra/auto-csr-approver-29557274-nxg42"
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.318660 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmqf\" (UniqueName: \"kubernetes.io/projected/496521b2-1479-451a-a065-b8609d0eac95-kube-api-access-ksmqf\") pod \"auto-csr-approver-29557274-nxg42\" (UID: \"496521b2-1479-451a-a065-b8609d0eac95\") " pod="openshift-infra/auto-csr-approver-29557274-nxg42"
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.349298 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksmqf\" (UniqueName: \"kubernetes.io/projected/496521b2-1479-451a-a065-b8609d0eac95-kube-api-access-ksmqf\") pod \"auto-csr-approver-29557274-nxg42\" (UID: \"496521b2-1479-451a-a065-b8609d0eac95\") " pod="openshift-infra/auto-csr-approver-29557274-nxg42"
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.476401 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557274-nxg42"
Mar 13 21:14:00 crc kubenswrapper[5029]: I0313 21:14:00.927635 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557274-nxg42"]
Mar 13 21:14:01 crc kubenswrapper[5029]: I0313 21:14:01.168029 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557274-nxg42" event={"ID":"496521b2-1479-451a-a065-b8609d0eac95","Type":"ContainerStarted","Data":"d4fd92c60394ca25b04fccd17fff26ebc72d314f53e18e79d10e8673b56512b0"}
Mar 13 21:14:03 crc kubenswrapper[5029]: I0313 21:14:03.194473 5029 generic.go:334] "Generic (PLEG): container finished" podID="496521b2-1479-451a-a065-b8609d0eac95" containerID="6f70965bf0f49466d266f5c261ca899d3ed3146c20e6643eb2c472e754a06397" exitCode=0
Mar 13 21:14:03 crc kubenswrapper[5029]: I0313 21:14:03.195008 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557274-nxg42" event={"ID":"496521b2-1479-451a-a065-b8609d0eac95","Type":"ContainerDied","Data":"6f70965bf0f49466d266f5c261ca899d3ed3146c20e6643eb2c472e754a06397"}
Mar 13 21:14:03 crc kubenswrapper[5029]: I0313 21:14:03.197574 5029 generic.go:334] "Generic (PLEG): container finished" podID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerID="904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731" exitCode=0
Mar 13 21:14:03 crc kubenswrapper[5029]: I0313 21:14:03.197615 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2z4z" event={"ID":"a8b93d35-ffb7-478e-84e4-c69ca9043957","Type":"ContainerDied","Data":"904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731"}
Mar 13 21:14:04 crc kubenswrapper[5029]: I0313 21:14:04.211193 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2z4z" event={"ID":"a8b93d35-ffb7-478e-84e4-c69ca9043957","Type":"ContainerStarted","Data":"9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71"}
Mar 13 21:14:04 crc kubenswrapper[5029]: I0313 21:14:04.250544 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x2z4z" podStartSLOduration=2.686357411 podStartE2EDuration="12.250511762s" podCreationTimestamp="2026-03-13 21:13:52 +0000 UTC" firstStartedPulling="2026-03-13 21:13:54.096466228 +0000 UTC m=+2794.112548631" lastFinishedPulling="2026-03-13 21:14:03.660620579 +0000 UTC m=+2803.676702982" observedRunningTime="2026-03-13 21:14:04.237533797 +0000 UTC m=+2804.253616210" watchObservedRunningTime="2026-03-13 21:14:04.250511762 +0000 UTC m=+2804.266594175"
Mar 13 21:14:04 crc kubenswrapper[5029]: I0313 21:14:04.550298 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557274-nxg42"
Mar 13 21:14:04 crc kubenswrapper[5029]: I0313 21:14:04.612273 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksmqf\" (UniqueName: \"kubernetes.io/projected/496521b2-1479-451a-a065-b8609d0eac95-kube-api-access-ksmqf\") pod \"496521b2-1479-451a-a065-b8609d0eac95\" (UID: \"496521b2-1479-451a-a065-b8609d0eac95\") "
Mar 13 21:14:04 crc kubenswrapper[5029]: I0313 21:14:04.619172 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496521b2-1479-451a-a065-b8609d0eac95-kube-api-access-ksmqf" (OuterVolumeSpecName: "kube-api-access-ksmqf") pod "496521b2-1479-451a-a065-b8609d0eac95" (UID: "496521b2-1479-451a-a065-b8609d0eac95"). InnerVolumeSpecName "kube-api-access-ksmqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:14:04 crc kubenswrapper[5029]: I0313 21:14:04.715692 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksmqf\" (UniqueName: \"kubernetes.io/projected/496521b2-1479-451a-a065-b8609d0eac95-kube-api-access-ksmqf\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:05 crc kubenswrapper[5029]: I0313 21:14:05.222394 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557274-nxg42" event={"ID":"496521b2-1479-451a-a065-b8609d0eac95","Type":"ContainerDied","Data":"d4fd92c60394ca25b04fccd17fff26ebc72d314f53e18e79d10e8673b56512b0"}
Mar 13 21:14:05 crc kubenswrapper[5029]: I0313 21:14:05.222448 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4fd92c60394ca25b04fccd17fff26ebc72d314f53e18e79d10e8673b56512b0"
Mar 13 21:14:05 crc kubenswrapper[5029]: I0313 21:14:05.222582 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557274-nxg42"
Mar 13 21:14:05 crc kubenswrapper[5029]: I0313 21:14:05.640701 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557268-5fw2s"]
Mar 13 21:14:05 crc kubenswrapper[5029]: I0313 21:14:05.652049 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557268-5fw2s"]
Mar 13 21:14:06 crc kubenswrapper[5029]: I0313 21:14:06.234869 5029 generic.go:334] "Generic (PLEG): container finished" podID="ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" containerID="3c0f510b8b10d423a1be02a0f36d920b6adcb31aaaa20b95a8762a09cd436f47" exitCode=0
Mar 13 21:14:06 crc kubenswrapper[5029]: I0313 21:14:06.234884 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" event={"ID":"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7","Type":"ContainerDied","Data":"3c0f510b8b10d423a1be02a0f36d920b6adcb31aaaa20b95a8762a09cd436f47"}
Mar 13 21:14:06 crc kubenswrapper[5029]: I0313 21:14:06.614520 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec600491-3bd7-45bb-8015-36719d4a56cd" path="/var/lib/kubelet/pods/ec600491-3bd7-45bb-8015-36719d4a56cd/volumes"
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.752555 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp"
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.893687 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-0\") pod \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") "
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.894121 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-2\") pod \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") "
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.894149 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-inventory\") pod \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") "
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.894178 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-1\") pod \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") "
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.894223 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ssh-key-openstack-edpm-ipam\") pod \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") "
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.894279 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-telemetry-combined-ca-bundle\") pod \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") "
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.894349 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf97g\" (UniqueName: \"kubernetes.io/projected/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-kube-api-access-rf97g\") pod \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\" (UID: \"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7\") "
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.901632 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-kube-api-access-rf97g" (OuterVolumeSpecName: "kube-api-access-rf97g") pod "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" (UID: "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7"). InnerVolumeSpecName "kube-api-access-rf97g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.902479 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" (UID: "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.927476 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" (UID: "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.933475 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" (UID: "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.935900 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-inventory" (OuterVolumeSpecName: "inventory") pod "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" (UID: "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.951060 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" (UID: "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.951507 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" (UID: "ee60ebd2-90a0-4b71-96e5-01348f8c7ba7"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.997774 5029 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.997819 5029 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.997829 5029 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-inventory\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.997839 5029 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.997867 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.997877 5029 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:07 crc kubenswrapper[5029]: I0313 21:14:07.997887 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf97g\" (UniqueName: \"kubernetes.io/projected/ee60ebd2-90a0-4b71-96e5-01348f8c7ba7-kube-api-access-rf97g\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:08 crc kubenswrapper[5029]: I0313 21:14:08.253013 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp" event={"ID":"ee60ebd2-90a0-4b71-96e5-01348f8c7ba7","Type":"ContainerDied","Data":"aaae1a1e3fb4f39841543c996718154cb78e2fe2910f386d76a1520b9a8336a5"}
Mar 13 21:14:08 crc kubenswrapper[5029]: I0313 21:14:08.253056 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaae1a1e3fb4f39841543c996718154cb78e2fe2910f386d76a1520b9a8336a5"
Mar 13 21:14:08 crc kubenswrapper[5029]: I0313 21:14:08.253095 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp"
Mar 13 21:14:09 crc kubenswrapper[5029]: I0313 21:14:09.599328 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db"
Mar 13 21:14:10 crc kubenswrapper[5029]: I0313 21:14:10.274737 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"cbc3255f6dca2b689804649d6ad92dd64d96b59e7450a9a1b5ec9bc9251f2fa4"}
Mar 13 21:14:12 crc kubenswrapper[5029]: I0313 21:14:12.720120 5029 scope.go:117] "RemoveContainer" containerID="d6b647e54d957f31673d6eaa295d3f5b3a297626ef55cca04b5c01eb49144471"
Mar 13 21:14:13 crc kubenswrapper[5029]: I0313 21:14:13.042997 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x2z4z"
Mar 13 21:14:13 crc kubenswrapper[5029]: I0313 21:14:13.043045 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x2z4z"
Mar 13 21:14:13 crc kubenswrapper[5029]: I0313 21:14:13.096155 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x2z4z"
Mar 13 21:14:13 crc kubenswrapper[5029]: I0313 21:14:13.349966 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x2z4z"
Mar 13 21:14:14 crc kubenswrapper[5029]: I0313 21:14:14.231112 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2z4z"]
Mar 13 21:14:15 crc kubenswrapper[5029]: I0313 21:14:15.322076 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x2z4z" podUID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerName="registry-server" containerID="cri-o://9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71" gracePeriod=2
Mar 13 21:14:15 crc kubenswrapper[5029]: I0313 21:14:15.834416 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2z4z"
Mar 13 21:14:15 crc kubenswrapper[5029]: I0313 21:14:15.867292 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-utilities\") pod \"a8b93d35-ffb7-478e-84e4-c69ca9043957\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") "
Mar 13 21:14:15 crc kubenswrapper[5029]: I0313 21:14:15.867528 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7k86\" (UniqueName: \"kubernetes.io/projected/a8b93d35-ffb7-478e-84e4-c69ca9043957-kube-api-access-n7k86\") pod \"a8b93d35-ffb7-478e-84e4-c69ca9043957\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") "
Mar 13 21:14:15 crc kubenswrapper[5029]: I0313 21:14:15.867661 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-catalog-content\") pod \"a8b93d35-ffb7-478e-84e4-c69ca9043957\" (UID: \"a8b93d35-ffb7-478e-84e4-c69ca9043957\") "
Mar 13 21:14:15 crc kubenswrapper[5029]: I0313 21:14:15.877306 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-utilities" (OuterVolumeSpecName: "utilities") pod "a8b93d35-ffb7-478e-84e4-c69ca9043957" (UID: "a8b93d35-ffb7-478e-84e4-c69ca9043957"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:14:15 crc kubenswrapper[5029]: I0313 21:14:15.881393 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b93d35-ffb7-478e-84e4-c69ca9043957-kube-api-access-n7k86" (OuterVolumeSpecName: "kube-api-access-n7k86") pod "a8b93d35-ffb7-478e-84e4-c69ca9043957" (UID: "a8b93d35-ffb7-478e-84e4-c69ca9043957"). InnerVolumeSpecName "kube-api-access-n7k86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:14:15 crc kubenswrapper[5029]: I0313 21:14:15.971133 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:15 crc kubenswrapper[5029]: I0313 21:14:15.971190 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7k86\" (UniqueName: \"kubernetes.io/projected/a8b93d35-ffb7-478e-84e4-c69ca9043957-kube-api-access-n7k86\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.043293 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8b93d35-ffb7-478e-84e4-c69ca9043957" (UID: "a8b93d35-ffb7-478e-84e4-c69ca9043957"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.073588 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b93d35-ffb7-478e-84e4-c69ca9043957-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.335832 5029 generic.go:334] "Generic (PLEG): container finished" podID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerID="9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71" exitCode=0
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.335945 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2z4z"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.335952 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2z4z" event={"ID":"a8b93d35-ffb7-478e-84e4-c69ca9043957","Type":"ContainerDied","Data":"9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71"}
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.336502 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2z4z" event={"ID":"a8b93d35-ffb7-478e-84e4-c69ca9043957","Type":"ContainerDied","Data":"65267f2a37fc2f375d5497d5fdcf5ab596eb9be4c2c670907013069f56599040"}
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.336548 5029 scope.go:117] "RemoveContainer" containerID="9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.375316 5029 scope.go:117] "RemoveContainer" containerID="904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.377087 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2z4z"]
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.397735 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x2z4z"]
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.399996 5029 scope.go:117] "RemoveContainer" containerID="61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.459270 5029 scope.go:117] "RemoveContainer" containerID="9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71"
Mar 13 21:14:16 crc kubenswrapper[5029]: E0313 21:14:16.459923 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71\": container with ID starting with 9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71 not found: ID does not exist" containerID="9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.459967 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71"} err="failed to get container status \"9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71\": rpc error: code = NotFound desc = could not find container \"9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71\": container with ID starting with 9526d6b87ec468aa8fbf890fca0735030bcd4e6c1b63f232ff4af4751c9bdb71 not found: ID does not exist"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.460018 5029 scope.go:117] "RemoveContainer" containerID="904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731"
Mar 13 21:14:16 crc kubenswrapper[5029]: E0313 21:14:16.460510 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731\": container with ID starting with 904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731 not found: ID does not exist" containerID="904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.460540 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731"} err="failed to get container status \"904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731\": rpc error: code = NotFound desc = could not find container \"904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731\": container with ID starting with 904d7f06162b0caeb4e5cc74375b533b797472ba6803e21c43c0bdf08e480731 not found: ID does not exist"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.460562 5029 scope.go:117] "RemoveContainer" containerID="61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4"
Mar 13 21:14:16 crc kubenswrapper[5029]: E0313 21:14:16.460955 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4\": container with ID starting with 61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4 not found: ID does not exist" containerID="61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.461013 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4"} err="failed to get container status \"61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4\": rpc error: code = NotFound desc = could not find container \"61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4\": container with ID starting with 61baba30d266e80a46d8895d5c27520502f95b7842058a11c253808aa04a87d4 not found: ID does not exist"
Mar 13 21:14:16 crc kubenswrapper[5029]: I0313 21:14:16.613523 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b93d35-ffb7-478e-84e4-c69ca9043957" path="/var/lib/kubelet/pods/a8b93d35-ffb7-478e-84e4-c69ca9043957/volumes"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.153513 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"]
Mar 13 21:15:00 crc kubenswrapper[5029]: E0313 21:15:00.154551 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496521b2-1479-451a-a065-b8609d0eac95" containerName="oc"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.154570 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="496521b2-1479-451a-a065-b8609d0eac95" containerName="oc"
Mar 13 21:15:00 crc kubenswrapper[5029]: E0313 21:15:00.154587 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerName="extract-content"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.154595 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerName="extract-content"
Mar 13 21:15:00 crc kubenswrapper[5029]: E0313 21:15:00.154632 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerName="extract-utilities"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.154643 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerName="extract-utilities"
Mar 13 21:15:00 crc kubenswrapper[5029]: E0313 21:15:00.154680 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.154688 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Mar 13 21:15:00 crc kubenswrapper[5029]: E0313 21:15:00.154708 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerName="registry-server"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.154714 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerName="registry-server"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.156603 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee60ebd2-90a0-4b71-96e5-01348f8c7ba7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.156636 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="496521b2-1479-451a-a065-b8609d0eac95" containerName="oc"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.156674 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b93d35-ffb7-478e-84e4-c69ca9043957" containerName="registry-server"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.157907 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.160596 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.161204 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.169278 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"]
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.206346 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63d53327-dc50-4621-95a2-1b17821475f5-config-volume\") pod \"collect-profiles-29557275-tvxdt\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.206435 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63d53327-dc50-4621-95a2-1b17821475f5-secret-volume\") pod \"collect-profiles-29557275-tvxdt\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.206755 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8kwr\" (UniqueName: \"kubernetes.io/projected/63d53327-dc50-4621-95a2-1b17821475f5-kube-api-access-h8kwr\") pod \"collect-profiles-29557275-tvxdt\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.308999 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63d53327-dc50-4621-95a2-1b17821475f5-config-volume\") pod \"collect-profiles-29557275-tvxdt\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.309064 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63d53327-dc50-4621-95a2-1b17821475f5-secret-volume\") pod \"collect-profiles-29557275-tvxdt\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.309109 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8kwr\" (UniqueName: \"kubernetes.io/projected/63d53327-dc50-4621-95a2-1b17821475f5-kube-api-access-h8kwr\") pod \"collect-profiles-29557275-tvxdt\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.310870 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63d53327-dc50-4621-95a2-1b17821475f5-config-volume\") pod \"collect-profiles-29557275-tvxdt\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"
Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.322064 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName:
\"kubernetes.io/secret/63d53327-dc50-4621-95a2-1b17821475f5-secret-volume\") pod \"collect-profiles-29557275-tvxdt\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt" Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.328237 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8kwr\" (UniqueName: \"kubernetes.io/projected/63d53327-dc50-4621-95a2-1b17821475f5-kube-api-access-h8kwr\") pod \"collect-profiles-29557275-tvxdt\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt" Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.492570 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt" Mar 13 21:15:00 crc kubenswrapper[5029]: I0313 21:15:00.979338 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"] Mar 13 21:15:01 crc kubenswrapper[5029]: I0313 21:15:01.801622 5029 generic.go:334] "Generic (PLEG): container finished" podID="63d53327-dc50-4621-95a2-1b17821475f5" containerID="9ec70c8553e2308f8be35ca67ac8a7099011acaf599c45b63563d7f7cd7b2537" exitCode=0 Mar 13 21:15:01 crc kubenswrapper[5029]: I0313 21:15:01.801743 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt" event={"ID":"63d53327-dc50-4621-95a2-1b17821475f5","Type":"ContainerDied","Data":"9ec70c8553e2308f8be35ca67ac8a7099011acaf599c45b63563d7f7cd7b2537"} Mar 13 21:15:01 crc kubenswrapper[5029]: I0313 21:15:01.802140 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt" 
event={"ID":"63d53327-dc50-4621-95a2-1b17821475f5","Type":"ContainerStarted","Data":"139ebaab33ddfbd45fe9d0e5162ba7e017b44a89da5b27015a533f5a83a1872a"} Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.255526 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt" Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.402185 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63d53327-dc50-4621-95a2-1b17821475f5-config-volume\") pod \"63d53327-dc50-4621-95a2-1b17821475f5\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.402797 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63d53327-dc50-4621-95a2-1b17821475f5-secret-volume\") pod \"63d53327-dc50-4621-95a2-1b17821475f5\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.402983 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8kwr\" (UniqueName: \"kubernetes.io/projected/63d53327-dc50-4621-95a2-1b17821475f5-kube-api-access-h8kwr\") pod \"63d53327-dc50-4621-95a2-1b17821475f5\" (UID: \"63d53327-dc50-4621-95a2-1b17821475f5\") " Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.404192 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d53327-dc50-4621-95a2-1b17821475f5-config-volume" (OuterVolumeSpecName: "config-volume") pod "63d53327-dc50-4621-95a2-1b17821475f5" (UID: "63d53327-dc50-4621-95a2-1b17821475f5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.411481 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d53327-dc50-4621-95a2-1b17821475f5-kube-api-access-h8kwr" (OuterVolumeSpecName: "kube-api-access-h8kwr") pod "63d53327-dc50-4621-95a2-1b17821475f5" (UID: "63d53327-dc50-4621-95a2-1b17821475f5"). InnerVolumeSpecName "kube-api-access-h8kwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.414625 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d53327-dc50-4621-95a2-1b17821475f5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "63d53327-dc50-4621-95a2-1b17821475f5" (UID: "63d53327-dc50-4621-95a2-1b17821475f5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.505172 5029 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63d53327-dc50-4621-95a2-1b17821475f5-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.505217 5029 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63d53327-dc50-4621-95a2-1b17821475f5-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.505227 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8kwr\" (UniqueName: \"kubernetes.io/projected/63d53327-dc50-4621-95a2-1b17821475f5-kube-api-access-h8kwr\") on node \"crc\" DevicePath \"\"" Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.826048 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt" 
event={"ID":"63d53327-dc50-4621-95a2-1b17821475f5","Type":"ContainerDied","Data":"139ebaab33ddfbd45fe9d0e5162ba7e017b44a89da5b27015a533f5a83a1872a"} Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.826103 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="139ebaab33ddfbd45fe9d0e5162ba7e017b44a89da5b27015a533f5a83a1872a" Mar 13 21:15:03 crc kubenswrapper[5029]: I0313 21:15:03.826174 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt" Mar 13 21:15:04 crc kubenswrapper[5029]: I0313 21:15:04.364645 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7"] Mar 13 21:15:04 crc kubenswrapper[5029]: I0313 21:15:04.374770 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557230-z7qq7"] Mar 13 21:15:04 crc kubenswrapper[5029]: I0313 21:15:04.619431 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8143251f-c7f9-42a8-a7ad-dfd9d5f87a05" path="/var/lib/kubelet/pods/8143251f-c7f9-42a8-a7ad-dfd9d5f87a05/volumes" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.053624 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 13 21:15:12 crc kubenswrapper[5029]: E0313 21:15:12.054570 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d53327-dc50-4621-95a2-1b17821475f5" containerName="collect-profiles" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.054584 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d53327-dc50-4621-95a2-1b17821475f5" containerName="collect-profiles" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.054806 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d53327-dc50-4621-95a2-1b17821475f5" containerName="collect-profiles" Mar 13 21:15:12 
crc kubenswrapper[5029]: I0313 21:15:12.055609 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.058236 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.059383 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.061680 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.082023 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.140034 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-config-data\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.140351 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.140660 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.242956 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.243016 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.243073 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.243098 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.243128 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.244546 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.244561 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-config-data\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.244662 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.245048 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.245107 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjn9\" (UniqueName: \"kubernetes.io/projected/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-kube-api-access-fjjn9\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" 
Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.246899 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-config-data\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.253698 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.347468 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.347918 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.348013 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjjn9\" (UniqueName: \"kubernetes.io/projected/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-kube-api-access-fjjn9\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.348128 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.348251 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.348336 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.348790 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.349058 5029 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.349197 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" 
(UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.354340 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.355162 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.370981 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjjn9\" (UniqueName: \"kubernetes.io/projected/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-kube-api-access-fjjn9\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.388925 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.692239 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 13 21:15:12 crc kubenswrapper[5029]: I0313 21:15:12.812496 5029 scope.go:117] "RemoveContainer" containerID="7baec69ba59d0f99ecff59871af045e3b028b60ec6b590f4197a0324d8177833" Mar 13 21:15:13 crc kubenswrapper[5029]: I0313 21:15:13.239020 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 13 21:15:13 crc kubenswrapper[5029]: I0313 21:15:13.244390 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 21:15:13 crc kubenswrapper[5029]: I0313 21:15:13.935370 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced","Type":"ContainerStarted","Data":"c38a8855e2ce68d0a08fd0f21360f4c39642cb53570353f053dc459829769486"} Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.323313 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2knnk"] Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.329051 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.356643 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2knnk"] Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.429175 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-catalog-content\") pod \"community-operators-2knnk\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.429670 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-utilities\") pod \"community-operators-2knnk\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.430019 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4m2l\" (UniqueName: \"kubernetes.io/projected/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-kube-api-access-w4m2l\") pod \"community-operators-2knnk\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.532730 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-catalog-content\") pod \"community-operators-2knnk\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.532894 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-utilities\") pod \"community-operators-2knnk\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.532988 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4m2l\" (UniqueName: \"kubernetes.io/projected/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-kube-api-access-w4m2l\") pod \"community-operators-2knnk\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.533687 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-catalog-content\") pod \"community-operators-2knnk\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.533740 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-utilities\") pod \"community-operators-2knnk\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.557300 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4m2l\" (UniqueName: \"kubernetes.io/projected/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-kube-api-access-w4m2l\") pod \"community-operators-2knnk\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:42 crc kubenswrapper[5029]: I0313 21:15:42.656988 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:44 crc kubenswrapper[5029]: E0313 21:15:44.903949 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 13 21:15:44 crc kubenswrapper[5029]: E0313 21:15:44.905404 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trus
t/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjjn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(ac9d86b5-6cef-43ea-90c2-3aebba7f6ced): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 21:15:44 crc kubenswrapper[5029]: E0313 21:15:44.906806 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" Mar 13 21:15:45 crc kubenswrapper[5029]: E0313 21:15:45.309255 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" Mar 13 21:15:45 crc kubenswrapper[5029]: I0313 21:15:45.584115 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2knnk"] Mar 13 21:15:45 crc kubenswrapper[5029]: W0313 21:15:45.649508 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2dfe05_8ff7_4a8c_a672_3ff57b9211d2.slice/crio-1b88f0954f5501ff47cc2142e92f55c5fd41055f490ad1064ccb43d52662bcb0 WatchSource:0}: Error finding container 1b88f0954f5501ff47cc2142e92f55c5fd41055f490ad1064ccb43d52662bcb0: Status 404 returned error can't find the container with id 1b88f0954f5501ff47cc2142e92f55c5fd41055f490ad1064ccb43d52662bcb0 Mar 13 21:15:46 crc kubenswrapper[5029]: I0313 21:15:46.318372 5029 generic.go:334] "Generic (PLEG): container finished" podID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerID="1bdcf42bb92184b1324a556d18c1a156723ede1d3c9da235d451f98f411fec41" exitCode=0 Mar 13 21:15:46 crc kubenswrapper[5029]: I0313 21:15:46.318483 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2knnk" event={"ID":"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2","Type":"ContainerDied","Data":"1bdcf42bb92184b1324a556d18c1a156723ede1d3c9da235d451f98f411fec41"} Mar 13 21:15:46 crc kubenswrapper[5029]: I0313 21:15:46.318884 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2knnk" 
event={"ID":"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2","Type":"ContainerStarted","Data":"1b88f0954f5501ff47cc2142e92f55c5fd41055f490ad1064ccb43d52662bcb0"} Mar 13 21:15:48 crc kubenswrapper[5029]: I0313 21:15:48.341346 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2knnk" event={"ID":"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2","Type":"ContainerStarted","Data":"edd638fe53972b96d25a068511a646093d9929255f77dcfda2bda76c2a47ba15"} Mar 13 21:15:50 crc kubenswrapper[5029]: I0313 21:15:50.365315 5029 generic.go:334] "Generic (PLEG): container finished" podID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerID="edd638fe53972b96d25a068511a646093d9929255f77dcfda2bda76c2a47ba15" exitCode=0 Mar 13 21:15:50 crc kubenswrapper[5029]: I0313 21:15:50.365381 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2knnk" event={"ID":"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2","Type":"ContainerDied","Data":"edd638fe53972b96d25a068511a646093d9929255f77dcfda2bda76c2a47ba15"} Mar 13 21:15:51 crc kubenswrapper[5029]: I0313 21:15:51.382433 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2knnk" event={"ID":"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2","Type":"ContainerStarted","Data":"6b291de216ab0535b96e3a0b485b29355ae2e8f2a95ef573737ee9f92ffefb2a"} Mar 13 21:15:51 crc kubenswrapper[5029]: I0313 21:15:51.403597 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2knnk" podStartSLOduration=4.737722412 podStartE2EDuration="9.403568692s" podCreationTimestamp="2026-03-13 21:15:42 +0000 UTC" firstStartedPulling="2026-03-13 21:15:46.320785482 +0000 UTC m=+2906.336867885" lastFinishedPulling="2026-03-13 21:15:50.986631762 +0000 UTC m=+2911.002714165" observedRunningTime="2026-03-13 21:15:51.402497513 +0000 UTC m=+2911.418579906" watchObservedRunningTime="2026-03-13 21:15:51.403568692 +0000 UTC 
m=+2911.419651085" Mar 13 21:15:52 crc kubenswrapper[5029]: I0313 21:15:52.657151 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:52 crc kubenswrapper[5029]: I0313 21:15:52.657738 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:15:53 crc kubenswrapper[5029]: I0313 21:15:53.708345 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2knnk" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerName="registry-server" probeResult="failure" output=< Mar 13 21:15:53 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s Mar 13 21:15:53 crc kubenswrapper[5029]: > Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.146843 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557276-tnrnc"] Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.149396 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557276-tnrnc" Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.152085 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.154312 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.154715 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.156994 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557276-tnrnc"] Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.177123 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqh4\" (UniqueName: \"kubernetes.io/projected/93c9968b-ac0a-4a0e-a4b7-bf4af5daa391-kube-api-access-6xqh4\") pod \"auto-csr-approver-29557276-tnrnc\" (UID: \"93c9968b-ac0a-4a0e-a4b7-bf4af5daa391\") " pod="openshift-infra/auto-csr-approver-29557276-tnrnc" Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.279600 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xqh4\" (UniqueName: \"kubernetes.io/projected/93c9968b-ac0a-4a0e-a4b7-bf4af5daa391-kube-api-access-6xqh4\") pod \"auto-csr-approver-29557276-tnrnc\" (UID: \"93c9968b-ac0a-4a0e-a4b7-bf4af5daa391\") " pod="openshift-infra/auto-csr-approver-29557276-tnrnc" Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.303381 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xqh4\" (UniqueName: \"kubernetes.io/projected/93c9968b-ac0a-4a0e-a4b7-bf4af5daa391-kube-api-access-6xqh4\") pod \"auto-csr-approver-29557276-tnrnc\" (UID: \"93c9968b-ac0a-4a0e-a4b7-bf4af5daa391\") " 
pod="openshift-infra/auto-csr-approver-29557276-tnrnc" Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.472623 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557276-tnrnc" Mar 13 21:16:00 crc kubenswrapper[5029]: I0313 21:16:00.936477 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557276-tnrnc"] Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.047316 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.486030 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557276-tnrnc" event={"ID":"93c9968b-ac0a-4a0e-a4b7-bf4af5daa391","Type":"ContainerStarted","Data":"e7b8c59931693ee4cdb7c3743ca091ee3eea0a5ae16165ebc99868b6010bb060"} Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.724730 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rcc45"] Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.727226 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.770449 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcc45"] Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.818999 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-catalog-content\") pod \"redhat-marketplace-rcc45\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") " pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.819272 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-utilities\") pod \"redhat-marketplace-rcc45\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") " pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.819581 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdhdt\" (UniqueName: \"kubernetes.io/projected/4b8cf9cf-1ce7-4122-a2df-680357bdb560-kube-api-access-zdhdt\") pod \"redhat-marketplace-rcc45\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") " pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.922703 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdhdt\" (UniqueName: \"kubernetes.io/projected/4b8cf9cf-1ce7-4122-a2df-680357bdb560-kube-api-access-zdhdt\") pod \"redhat-marketplace-rcc45\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") " pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.922894 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-catalog-content\") pod \"redhat-marketplace-rcc45\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") " pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.922923 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-utilities\") pod \"redhat-marketplace-rcc45\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") " pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.923699 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-utilities\") pod \"redhat-marketplace-rcc45\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") " pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.925926 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-catalog-content\") pod \"redhat-marketplace-rcc45\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") " pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:01 crc kubenswrapper[5029]: I0313 21:16:01.949726 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdhdt\" (UniqueName: \"kubernetes.io/projected/4b8cf9cf-1ce7-4122-a2df-680357bdb560-kube-api-access-zdhdt\") pod \"redhat-marketplace-rcc45\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") " pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:02 crc kubenswrapper[5029]: I0313 21:16:02.063089 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:02 crc kubenswrapper[5029]: I0313 21:16:02.498606 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557276-tnrnc" event={"ID":"93c9968b-ac0a-4a0e-a4b7-bf4af5daa391","Type":"ContainerStarted","Data":"328361642bdd2473263b8f34cc501937de7477713b4bfd47ff33be5d10d70393"} Mar 13 21:16:02 crc kubenswrapper[5029]: I0313 21:16:02.525492 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557276-tnrnc" podStartSLOduration=1.611386498 podStartE2EDuration="2.525465313s" podCreationTimestamp="2026-03-13 21:16:00 +0000 UTC" firstStartedPulling="2026-03-13 21:16:00.938617596 +0000 UTC m=+2920.954699999" lastFinishedPulling="2026-03-13 21:16:01.852696411 +0000 UTC m=+2921.868778814" observedRunningTime="2026-03-13 21:16:02.515238133 +0000 UTC m=+2922.531320546" watchObservedRunningTime="2026-03-13 21:16:02.525465313 +0000 UTC m=+2922.541547716" Mar 13 21:16:02 crc kubenswrapper[5029]: I0313 21:16:02.567204 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcc45"] Mar 13 21:16:02 crc kubenswrapper[5029]: W0313 21:16:02.570967 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b8cf9cf_1ce7_4122_a2df_680357bdb560.slice/crio-77b08b1c82d794601f7034532eeed8dfec350884f5c0fd08d858e6217f741552 WatchSource:0}: Error finding container 77b08b1c82d794601f7034532eeed8dfec350884f5c0fd08d858e6217f741552: Status 404 returned error can't find the container with id 77b08b1c82d794601f7034532eeed8dfec350884f5c0fd08d858e6217f741552 Mar 13 21:16:02 crc kubenswrapper[5029]: I0313 21:16:02.719698 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:16:02 crc kubenswrapper[5029]: I0313 
21:16:02.789809 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:16:03 crc kubenswrapper[5029]: I0313 21:16:03.511647 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced","Type":"ContainerStarted","Data":"f17cb623d390073b1880bcb1cd96b9be0c3a56713608483dd3e8b2dbe6c35ee4"} Mar 13 21:16:03 crc kubenswrapper[5029]: I0313 21:16:03.513730 5029 generic.go:334] "Generic (PLEG): container finished" podID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerID="e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220" exitCode=0 Mar 13 21:16:03 crc kubenswrapper[5029]: I0313 21:16:03.514145 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcc45" event={"ID":"4b8cf9cf-1ce7-4122-a2df-680357bdb560","Type":"ContainerDied","Data":"e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220"} Mar 13 21:16:03 crc kubenswrapper[5029]: I0313 21:16:03.514210 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcc45" event={"ID":"4b8cf9cf-1ce7-4122-a2df-680357bdb560","Type":"ContainerStarted","Data":"77b08b1c82d794601f7034532eeed8dfec350884f5c0fd08d858e6217f741552"} Mar 13 21:16:03 crc kubenswrapper[5029]: I0313 21:16:03.515984 5029 generic.go:334] "Generic (PLEG): container finished" podID="93c9968b-ac0a-4a0e-a4b7-bf4af5daa391" containerID="328361642bdd2473263b8f34cc501937de7477713b4bfd47ff33be5d10d70393" exitCode=0 Mar 13 21:16:03 crc kubenswrapper[5029]: I0313 21:16:03.516067 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557276-tnrnc" event={"ID":"93c9968b-ac0a-4a0e-a4b7-bf4af5daa391","Type":"ContainerDied","Data":"328361642bdd2473263b8f34cc501937de7477713b4bfd47ff33be5d10d70393"} Mar 13 21:16:03 crc kubenswrapper[5029]: I0313 21:16:03.548382 5029 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.74822598 podStartE2EDuration="52.548354306s" podCreationTimestamp="2026-03-13 21:15:11 +0000 UTC" firstStartedPulling="2026-03-13 21:15:13.24410005 +0000 UTC m=+2873.260182453" lastFinishedPulling="2026-03-13 21:16:01.044228386 +0000 UTC m=+2921.060310779" observedRunningTime="2026-03-13 21:16:03.532715278 +0000 UTC m=+2923.548797681" watchObservedRunningTime="2026-03-13 21:16:03.548354306 +0000 UTC m=+2923.564436709" Mar 13 21:16:04 crc kubenswrapper[5029]: I0313 21:16:04.536532 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcc45" event={"ID":"4b8cf9cf-1ce7-4122-a2df-680357bdb560","Type":"ContainerStarted","Data":"5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5"} Mar 13 21:16:04 crc kubenswrapper[5029]: I0313 21:16:04.950195 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557276-tnrnc" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.103781 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2knnk"] Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.104101 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2knnk" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerName="registry-server" containerID="cri-o://6b291de216ab0535b96e3a0b485b29355ae2e8f2a95ef573737ee9f92ffefb2a" gracePeriod=2 Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.107480 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xqh4\" (UniqueName: \"kubernetes.io/projected/93c9968b-ac0a-4a0e-a4b7-bf4af5daa391-kube-api-access-6xqh4\") pod \"93c9968b-ac0a-4a0e-a4b7-bf4af5daa391\" (UID: \"93c9968b-ac0a-4a0e-a4b7-bf4af5daa391\") " Mar 13 
21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.118584 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c9968b-ac0a-4a0e-a4b7-bf4af5daa391-kube-api-access-6xqh4" (OuterVolumeSpecName: "kube-api-access-6xqh4") pod "93c9968b-ac0a-4a0e-a4b7-bf4af5daa391" (UID: "93c9968b-ac0a-4a0e-a4b7-bf4af5daa391"). InnerVolumeSpecName "kube-api-access-6xqh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.210833 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xqh4\" (UniqueName: \"kubernetes.io/projected/93c9968b-ac0a-4a0e-a4b7-bf4af5daa391-kube-api-access-6xqh4\") on node \"crc\" DevicePath \"\"" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.603966 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557276-tnrnc" event={"ID":"93c9968b-ac0a-4a0e-a4b7-bf4af5daa391","Type":"ContainerDied","Data":"e7b8c59931693ee4cdb7c3743ca091ee3eea0a5ae16165ebc99868b6010bb060"} Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.604477 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b8c59931693ee4cdb7c3743ca091ee3eea0a5ae16165ebc99868b6010bb060" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.604952 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557276-tnrnc" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.612486 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557270-2c9d9"] Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.623568 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557270-2c9d9"] Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.640661 5029 generic.go:334] "Generic (PLEG): container finished" podID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerID="6b291de216ab0535b96e3a0b485b29355ae2e8f2a95ef573737ee9f92ffefb2a" exitCode=0 Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.641503 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2knnk" event={"ID":"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2","Type":"ContainerDied","Data":"6b291de216ab0535b96e3a0b485b29355ae2e8f2a95ef573737ee9f92ffefb2a"} Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.641612 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2knnk" event={"ID":"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2","Type":"ContainerDied","Data":"1b88f0954f5501ff47cc2142e92f55c5fd41055f490ad1064ccb43d52662bcb0"} Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.641629 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b88f0954f5501ff47cc2142e92f55c5fd41055f490ad1064ccb43d52662bcb0" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.645299 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.732719 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4m2l\" (UniqueName: \"kubernetes.io/projected/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-kube-api-access-w4m2l\") pod \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.732911 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-utilities\") pod \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.732941 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-catalog-content\") pod \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\" (UID: \"ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2\") " Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.734920 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-utilities" (OuterVolumeSpecName: "utilities") pod "ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" (UID: "ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.741207 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-kube-api-access-w4m2l" (OuterVolumeSpecName: "kube-api-access-w4m2l") pod "ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" (UID: "ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2"). InnerVolumeSpecName "kube-api-access-w4m2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.792048 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" (UID: "ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.836557 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4m2l\" (UniqueName: \"kubernetes.io/projected/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-kube-api-access-w4m2l\") on node \"crc\" DevicePath \"\"" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.836625 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:16:05 crc kubenswrapper[5029]: I0313 21:16:05.836645 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:16:06 crc kubenswrapper[5029]: I0313 21:16:06.614341 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="048deb4f-2208-41fd-93b3-210f84bc8203" path="/var/lib/kubelet/pods/048deb4f-2208-41fd-93b3-210f84bc8203/volumes" Mar 13 21:16:06 crc kubenswrapper[5029]: I0313 21:16:06.651403 5029 generic.go:334] "Generic (PLEG): container finished" podID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerID="5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5" exitCode=0 Mar 13 21:16:06 crc kubenswrapper[5029]: I0313 21:16:06.651499 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcc45" 
event={"ID":"4b8cf9cf-1ce7-4122-a2df-680357bdb560","Type":"ContainerDied","Data":"5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5"} Mar 13 21:16:06 crc kubenswrapper[5029]: I0313 21:16:06.651518 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2knnk" Mar 13 21:16:06 crc kubenswrapper[5029]: I0313 21:16:06.710055 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2knnk"] Mar 13 21:16:06 crc kubenswrapper[5029]: I0313 21:16:06.722665 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2knnk"] Mar 13 21:16:07 crc kubenswrapper[5029]: I0313 21:16:07.662363 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcc45" event={"ID":"4b8cf9cf-1ce7-4122-a2df-680357bdb560","Type":"ContainerStarted","Data":"747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0"} Mar 13 21:16:07 crc kubenswrapper[5029]: I0313 21:16:07.689182 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rcc45" podStartSLOduration=3.104918778 podStartE2EDuration="6.689162517s" podCreationTimestamp="2026-03-13 21:16:01 +0000 UTC" firstStartedPulling="2026-03-13 21:16:03.517921293 +0000 UTC m=+2923.534003696" lastFinishedPulling="2026-03-13 21:16:07.102165032 +0000 UTC m=+2927.118247435" observedRunningTime="2026-03-13 21:16:07.67944049 +0000 UTC m=+2927.695522903" watchObservedRunningTime="2026-03-13 21:16:07.689162517 +0000 UTC m=+2927.705244920" Mar 13 21:16:08 crc kubenswrapper[5029]: I0313 21:16:08.612587 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" path="/var/lib/kubelet/pods/ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2/volumes" Mar 13 21:16:12 crc kubenswrapper[5029]: I0313 21:16:12.064579 5029 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:12 crc kubenswrapper[5029]: I0313 21:16:12.065249 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:12 crc kubenswrapper[5029]: I0313 21:16:12.114285 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:12 crc kubenswrapper[5029]: I0313 21:16:12.783225 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rcc45" Mar 13 21:16:12 crc kubenswrapper[5029]: I0313 21:16:12.936975 5029 scope.go:117] "RemoveContainer" containerID="473fc57923e7a6b3c85837a607f6246ef10e4485b9ed990c8abc5ac06b267b8f" Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.100293 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcc45"] Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.101190 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rcc45" podUID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerName="registry-server" containerID="cri-o://747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0" gracePeriod=2 Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.642012 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcc45"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.759284 5029 generic.go:334] "Generic (PLEG): container finished" podID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerID="747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0" exitCode=0
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.759348 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcc45" event={"ID":"4b8cf9cf-1ce7-4122-a2df-680357bdb560","Type":"ContainerDied","Data":"747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0"}
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.759368 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcc45"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.759404 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcc45" event={"ID":"4b8cf9cf-1ce7-4122-a2df-680357bdb560","Type":"ContainerDied","Data":"77b08b1c82d794601f7034532eeed8dfec350884f5c0fd08d858e6217f741552"}
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.759424 5029 scope.go:117] "RemoveContainer" containerID="747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.778419 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-utilities\") pod \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") "
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.778577 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdhdt\" (UniqueName: \"kubernetes.io/projected/4b8cf9cf-1ce7-4122-a2df-680357bdb560-kube-api-access-zdhdt\") pod \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") "
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.778647 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-catalog-content\") pod \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\" (UID: \"4b8cf9cf-1ce7-4122-a2df-680357bdb560\") "
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.779608 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-utilities" (OuterVolumeSpecName: "utilities") pod "4b8cf9cf-1ce7-4122-a2df-680357bdb560" (UID: "4b8cf9cf-1ce7-4122-a2df-680357bdb560"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.781440 5029 scope.go:117] "RemoveContainer" containerID="5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.786324 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8cf9cf-1ce7-4122-a2df-680357bdb560-kube-api-access-zdhdt" (OuterVolumeSpecName: "kube-api-access-zdhdt") pod "4b8cf9cf-1ce7-4122-a2df-680357bdb560" (UID: "4b8cf9cf-1ce7-4122-a2df-680357bdb560"). InnerVolumeSpecName "kube-api-access-zdhdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.863704 5029 scope.go:117] "RemoveContainer" containerID="e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.882127 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.882158 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdhdt\" (UniqueName: \"kubernetes.io/projected/4b8cf9cf-1ce7-4122-a2df-680357bdb560-kube-api-access-zdhdt\") on node \"crc\" DevicePath \"\""
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.912985 5029 scope.go:117] "RemoveContainer" containerID="747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0"
Mar 13 21:16:15 crc kubenswrapper[5029]: E0313 21:16:15.913836 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0\": container with ID starting with 747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0 not found: ID does not exist" containerID="747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.913922 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0"} err="failed to get container status \"747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0\": rpc error: code = NotFound desc = could not find container \"747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0\": container with ID starting with 747e5c46e025d389548375728c565955edc44aef26eacfe13e9f7afaa13f21b0 not found: ID does not exist"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.914058 5029 scope.go:117] "RemoveContainer" containerID="5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5"
Mar 13 21:16:15 crc kubenswrapper[5029]: E0313 21:16:15.914780 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5\": container with ID starting with 5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5 not found: ID does not exist" containerID="5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.914811 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5"} err="failed to get container status \"5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5\": rpc error: code = NotFound desc = could not find container \"5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5\": container with ID starting with 5c71b40f71b7a19d8dc22f852f27c5725471252e858016b6d39d9d68b7cd6ca5 not found: ID does not exist"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.914840 5029 scope.go:117] "RemoveContainer" containerID="e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220"
Mar 13 21:16:15 crc kubenswrapper[5029]: E0313 21:16:15.915191 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220\": container with ID starting with e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220 not found: ID does not exist" containerID="e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220"
Mar 13 21:16:15 crc kubenswrapper[5029]: I0313 21:16:15.915228 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220"} err="failed to get container status \"e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220\": rpc error: code = NotFound desc = could not find container \"e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220\": container with ID starting with e0b2b279de284ab126d2956589058086f53a40effc8f87843c93f19cd0758220 not found: ID does not exist"
Mar 13 21:16:16 crc kubenswrapper[5029]: I0313 21:16:16.689128 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b8cf9cf-1ce7-4122-a2df-680357bdb560" (UID: "4b8cf9cf-1ce7-4122-a2df-680357bdb560"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:16:16 crc kubenswrapper[5029]: I0313 21:16:16.700512 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8cf9cf-1ce7-4122-a2df-680357bdb560-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 21:16:17 crc kubenswrapper[5029]: I0313 21:16:17.002061 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcc45"]
Mar 13 21:16:17 crc kubenswrapper[5029]: I0313 21:16:17.014431 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcc45"]
Mar 13 21:16:18 crc kubenswrapper[5029]: I0313 21:16:18.611261 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" path="/var/lib/kubelet/pods/4b8cf9cf-1ce7-4122-a2df-680357bdb560/volumes"
Mar 13 21:16:31 crc kubenswrapper[5029]: I0313 21:16:31.950334 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:16:31 crc kubenswrapper[5029]: I0313 21:16:31.952078 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:17:01 crc kubenswrapper[5029]: I0313 21:17:01.950912 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:17:01 crc kubenswrapper[5029]: I0313 21:17:01.951671 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:17:31 crc kubenswrapper[5029]: I0313 21:17:31.950119 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:17:31 crc kubenswrapper[5029]: I0313 21:17:31.950661 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:17:31 crc kubenswrapper[5029]: I0313 21:17:31.950725 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2"
Mar 13 21:17:31 crc kubenswrapper[5029]: I0313 21:17:31.951954 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbc3255f6dca2b689804649d6ad92dd64d96b59e7450a9a1b5ec9bc9251f2fa4"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 21:17:31 crc kubenswrapper[5029]: I0313 21:17:31.952028 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://cbc3255f6dca2b689804649d6ad92dd64d96b59e7450a9a1b5ec9bc9251f2fa4" gracePeriod=600
Mar 13 21:17:32 crc kubenswrapper[5029]: I0313 21:17:32.522403 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="cbc3255f6dca2b689804649d6ad92dd64d96b59e7450a9a1b5ec9bc9251f2fa4" exitCode=0
Mar 13 21:17:32 crc kubenswrapper[5029]: I0313 21:17:32.522449 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"cbc3255f6dca2b689804649d6ad92dd64d96b59e7450a9a1b5ec9bc9251f2fa4"}
Mar 13 21:17:32 crc kubenswrapper[5029]: I0313 21:17:32.522929 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a"}
Mar 13 21:17:32 crc kubenswrapper[5029]: I0313 21:17:32.522961 5029 scope.go:117] "RemoveContainer" containerID="cd9f5ea8768f7977fc8cd2dd7e4a297d4cb89d19a8db76d844982968c93fc1db"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.158230 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557278-6qsgh"]
Mar 13 21:18:00 crc kubenswrapper[5029]: E0313 21:18:00.159284 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerName="extract-content"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159299 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerName="extract-content"
Mar 13 21:18:00 crc kubenswrapper[5029]: E0313 21:18:00.159320 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerName="extract-content"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159326 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerName="extract-content"
Mar 13 21:18:00 crc kubenswrapper[5029]: E0313 21:18:00.159338 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c9968b-ac0a-4a0e-a4b7-bf4af5daa391" containerName="oc"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159345 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c9968b-ac0a-4a0e-a4b7-bf4af5daa391" containerName="oc"
Mar 13 21:18:00 crc kubenswrapper[5029]: E0313 21:18:00.159355 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerName="extract-utilities"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159360 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerName="extract-utilities"
Mar 13 21:18:00 crc kubenswrapper[5029]: E0313 21:18:00.159395 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerName="registry-server"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159401 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerName="registry-server"
Mar 13 21:18:00 crc kubenswrapper[5029]: E0313 21:18:00.159410 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerName="extract-utilities"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159417 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerName="extract-utilities"
Mar 13 21:18:00 crc kubenswrapper[5029]: E0313 21:18:00.159429 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerName="registry-server"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159435 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerName="registry-server"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159606 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8cf9cf-1ce7-4122-a2df-680357bdb560" containerName="registry-server"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159627 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2dfe05-8ff7-4a8c-a672-3ff57b9211d2" containerName="registry-server"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.159643 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c9968b-ac0a-4a0e-a4b7-bf4af5daa391" containerName="oc"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.160457 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557278-6qsgh"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.164066 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.164076 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.164675 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.175293 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557278-6qsgh"]
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.339223 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwsd\" (UniqueName: \"kubernetes.io/projected/1c11dea9-301a-41b7-a829-36cbdd1e7158-kube-api-access-spwsd\") pod \"auto-csr-approver-29557278-6qsgh\" (UID: \"1c11dea9-301a-41b7-a829-36cbdd1e7158\") " pod="openshift-infra/auto-csr-approver-29557278-6qsgh"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.441804 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spwsd\" (UniqueName: \"kubernetes.io/projected/1c11dea9-301a-41b7-a829-36cbdd1e7158-kube-api-access-spwsd\") pod \"auto-csr-approver-29557278-6qsgh\" (UID: \"1c11dea9-301a-41b7-a829-36cbdd1e7158\") " pod="openshift-infra/auto-csr-approver-29557278-6qsgh"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.472063 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spwsd\" (UniqueName: \"kubernetes.io/projected/1c11dea9-301a-41b7-a829-36cbdd1e7158-kube-api-access-spwsd\") pod \"auto-csr-approver-29557278-6qsgh\" (UID: \"1c11dea9-301a-41b7-a829-36cbdd1e7158\") " pod="openshift-infra/auto-csr-approver-29557278-6qsgh"
Mar 13 21:18:00 crc kubenswrapper[5029]: I0313 21:18:00.483003 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557278-6qsgh"
Mar 13 21:18:01 crc kubenswrapper[5029]: I0313 21:18:01.233341 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557278-6qsgh"]
Mar 13 21:18:01 crc kubenswrapper[5029]: I0313 21:18:01.850950 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557278-6qsgh" event={"ID":"1c11dea9-301a-41b7-a829-36cbdd1e7158","Type":"ContainerStarted","Data":"e465776296cfa59df79df106585eb079cd1c5c7faa94e55582828393dbd26061"}
Mar 13 21:18:02 crc kubenswrapper[5029]: I0313 21:18:02.860225 5029 generic.go:334] "Generic (PLEG): container finished" podID="1c11dea9-301a-41b7-a829-36cbdd1e7158" containerID="96ef8784deff088c39b0cd00412c5f6b75bab7abec4302cfedb16e6dafc1e6ab" exitCode=0
Mar 13 21:18:02 crc kubenswrapper[5029]: I0313 21:18:02.860471 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557278-6qsgh" event={"ID":"1c11dea9-301a-41b7-a829-36cbdd1e7158","Type":"ContainerDied","Data":"96ef8784deff088c39b0cd00412c5f6b75bab7abec4302cfedb16e6dafc1e6ab"}
Mar 13 21:18:04 crc kubenswrapper[5029]: I0313 21:18:04.432105 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557278-6qsgh"
Mar 13 21:18:04 crc kubenswrapper[5029]: I0313 21:18:04.448328 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spwsd\" (UniqueName: \"kubernetes.io/projected/1c11dea9-301a-41b7-a829-36cbdd1e7158-kube-api-access-spwsd\") pod \"1c11dea9-301a-41b7-a829-36cbdd1e7158\" (UID: \"1c11dea9-301a-41b7-a829-36cbdd1e7158\") "
Mar 13 21:18:04 crc kubenswrapper[5029]: I0313 21:18:04.456729 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c11dea9-301a-41b7-a829-36cbdd1e7158-kube-api-access-spwsd" (OuterVolumeSpecName: "kube-api-access-spwsd") pod "1c11dea9-301a-41b7-a829-36cbdd1e7158" (UID: "1c11dea9-301a-41b7-a829-36cbdd1e7158"). InnerVolumeSpecName "kube-api-access-spwsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:18:04 crc kubenswrapper[5029]: I0313 21:18:04.554076 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spwsd\" (UniqueName: \"kubernetes.io/projected/1c11dea9-301a-41b7-a829-36cbdd1e7158-kube-api-access-spwsd\") on node \"crc\" DevicePath \"\""
Mar 13 21:18:04 crc kubenswrapper[5029]: E0313 21:18:04.818363 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c11dea9_301a_41b7_a829_36cbdd1e7158.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c11dea9_301a_41b7_a829_36cbdd1e7158.slice/crio-e465776296cfa59df79df106585eb079cd1c5c7faa94e55582828393dbd26061\": RecentStats: unable to find data in memory cache]"
Mar 13 21:18:04 crc kubenswrapper[5029]: I0313 21:18:04.879444 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557278-6qsgh" event={"ID":"1c11dea9-301a-41b7-a829-36cbdd1e7158","Type":"ContainerDied","Data":"e465776296cfa59df79df106585eb079cd1c5c7faa94e55582828393dbd26061"}
Mar 13 21:18:04 crc kubenswrapper[5029]: I0313 21:18:04.879508 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e465776296cfa59df79df106585eb079cd1c5c7faa94e55582828393dbd26061"
Mar 13 21:18:04 crc kubenswrapper[5029]: I0313 21:18:04.879514 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557278-6qsgh"
Mar 13 21:18:05 crc kubenswrapper[5029]: I0313 21:18:05.529272 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557272-2j7hk"]
Mar 13 21:18:05 crc kubenswrapper[5029]: I0313 21:18:05.540447 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557272-2j7hk"]
Mar 13 21:18:06 crc kubenswrapper[5029]: I0313 21:18:06.610966 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713e3ec7-dbc6-44b7-81b3-36043df72d54" path="/var/lib/kubelet/pods/713e3ec7-dbc6-44b7-81b3-36043df72d54/volumes"
Mar 13 21:18:13 crc kubenswrapper[5029]: I0313 21:18:13.062478 5029 scope.go:117] "RemoveContainer" containerID="f92fbd6927377431aa623fedb2fbccb171b557a645a63d6c863d587e1342d679"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.158033 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557280-t7jqq"]
Mar 13 21:20:00 crc kubenswrapper[5029]: E0313 21:20:00.159247 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c11dea9-301a-41b7-a829-36cbdd1e7158" containerName="oc"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.159269 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c11dea9-301a-41b7-a829-36cbdd1e7158" containerName="oc"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.159553 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c11dea9-301a-41b7-a829-36cbdd1e7158" containerName="oc"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.160491 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557280-t7jqq"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.164036 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.164905 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.166547 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.175297 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557280-t7jqq"]
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.257899 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brgq\" (UniqueName: \"kubernetes.io/projected/f515de46-72f7-4aae-a11d-a7aa2aa3e40a-kube-api-access-5brgq\") pod \"auto-csr-approver-29557280-t7jqq\" (UID: \"f515de46-72f7-4aae-a11d-a7aa2aa3e40a\") " pod="openshift-infra/auto-csr-approver-29557280-t7jqq"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.365950 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brgq\" (UniqueName: \"kubernetes.io/projected/f515de46-72f7-4aae-a11d-a7aa2aa3e40a-kube-api-access-5brgq\") pod \"auto-csr-approver-29557280-t7jqq\" (UID: \"f515de46-72f7-4aae-a11d-a7aa2aa3e40a\") " pod="openshift-infra/auto-csr-approver-29557280-t7jqq"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.390813 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brgq\" (UniqueName: \"kubernetes.io/projected/f515de46-72f7-4aae-a11d-a7aa2aa3e40a-kube-api-access-5brgq\") pod \"auto-csr-approver-29557280-t7jqq\" (UID: \"f515de46-72f7-4aae-a11d-a7aa2aa3e40a\") " pod="openshift-infra/auto-csr-approver-29557280-t7jqq"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.484934 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557280-t7jqq"
Mar 13 21:20:00 crc kubenswrapper[5029]: I0313 21:20:00.996687 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557280-t7jqq"]
Mar 13 21:20:01 crc kubenswrapper[5029]: I0313 21:20:01.091812 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557280-t7jqq" event={"ID":"f515de46-72f7-4aae-a11d-a7aa2aa3e40a","Type":"ContainerStarted","Data":"6b4e917d55d40bdfdaa6bd5b3d4a58ea1d15aaf5842269530907ecd929aa8842"}
Mar 13 21:20:01 crc kubenswrapper[5029]: I0313 21:20:01.950430 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:20:01 crc kubenswrapper[5029]: I0313 21:20:01.950779 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:20:03 crc kubenswrapper[5029]: I0313 21:20:03.117043 5029 generic.go:334] "Generic (PLEG): container finished" podID="f515de46-72f7-4aae-a11d-a7aa2aa3e40a" containerID="e84cb40862f3a514da164f491b74683a39d46f9cde544c41ffbdb9228900f3c1" exitCode=0
Mar 13 21:20:03 crc kubenswrapper[5029]: I0313 21:20:03.117315 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557280-t7jqq" event={"ID":"f515de46-72f7-4aae-a11d-a7aa2aa3e40a","Type":"ContainerDied","Data":"e84cb40862f3a514da164f491b74683a39d46f9cde544c41ffbdb9228900f3c1"}
Mar 13 21:20:04 crc kubenswrapper[5029]: I0313 21:20:04.776622 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557280-t7jqq"
Mar 13 21:20:04 crc kubenswrapper[5029]: I0313 21:20:04.911315 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5brgq\" (UniqueName: \"kubernetes.io/projected/f515de46-72f7-4aae-a11d-a7aa2aa3e40a-kube-api-access-5brgq\") pod \"f515de46-72f7-4aae-a11d-a7aa2aa3e40a\" (UID: \"f515de46-72f7-4aae-a11d-a7aa2aa3e40a\") "
Mar 13 21:20:04 crc kubenswrapper[5029]: I0313 21:20:04.936249 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f515de46-72f7-4aae-a11d-a7aa2aa3e40a-kube-api-access-5brgq" (OuterVolumeSpecName: "kube-api-access-5brgq") pod "f515de46-72f7-4aae-a11d-a7aa2aa3e40a" (UID: "f515de46-72f7-4aae-a11d-a7aa2aa3e40a"). InnerVolumeSpecName "kube-api-access-5brgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:20:05 crc kubenswrapper[5029]: I0313 21:20:05.016236 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5brgq\" (UniqueName: \"kubernetes.io/projected/f515de46-72f7-4aae-a11d-a7aa2aa3e40a-kube-api-access-5brgq\") on node \"crc\" DevicePath \"\""
Mar 13 21:20:05 crc kubenswrapper[5029]: I0313 21:20:05.140350 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557280-t7jqq" event={"ID":"f515de46-72f7-4aae-a11d-a7aa2aa3e40a","Type":"ContainerDied","Data":"6b4e917d55d40bdfdaa6bd5b3d4a58ea1d15aaf5842269530907ecd929aa8842"}
Mar 13 21:20:05 crc kubenswrapper[5029]: I0313 21:20:05.140940 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4e917d55d40bdfdaa6bd5b3d4a58ea1d15aaf5842269530907ecd929aa8842"
Mar 13 21:20:05 crc kubenswrapper[5029]: I0313 21:20:05.140661 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557280-t7jqq"
Mar 13 21:20:05 crc kubenswrapper[5029]: I0313 21:20:05.861814 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557274-nxg42"]
Mar 13 21:20:05 crc kubenswrapper[5029]: I0313 21:20:05.880469 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557274-nxg42"]
Mar 13 21:20:06 crc kubenswrapper[5029]: I0313 21:20:06.611020 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496521b2-1479-451a-a065-b8609d0eac95" path="/var/lib/kubelet/pods/496521b2-1479-451a-a065-b8609d0eac95/volumes"
Mar 13 21:20:13 crc kubenswrapper[5029]: I0313 21:20:13.187195 5029 scope.go:117] "RemoveContainer" containerID="6f70965bf0f49466d266f5c261ca899d3ed3146c20e6643eb2c472e754a06397"
Mar 13 21:20:31 crc kubenswrapper[5029]: I0313 21:20:31.950792 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:20:31 crc kubenswrapper[5029]: I0313 21:20:31.951876 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:21:01 crc kubenswrapper[5029]: I0313 21:21:01.950088 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:21:01 crc kubenswrapper[5029]: I0313 21:21:01.950761 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:21:01 crc kubenswrapper[5029]: I0313 21:21:01.950824 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2"
Mar 13 21:21:01 crc kubenswrapper[5029]: I0313 21:21:01.951577 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 21:21:01 crc kubenswrapper[5029]: I0313 21:21:01.951641 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" gracePeriod=600
Mar 13 21:21:02 crc kubenswrapper[5029]: E0313 21:21:02.073907 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:21:02 crc kubenswrapper[5029]: I0313 21:21:02.739298 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" exitCode=0
Mar 13 21:21:02 crc kubenswrapper[5029]: I0313 21:21:02.739990 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a"}
Mar 13 21:21:02 crc kubenswrapper[5029]: I0313 21:21:02.740099 5029 scope.go:117] "RemoveContainer" containerID="cbc3255f6dca2b689804649d6ad92dd64d96b59e7450a9a1b5ec9bc9251f2fa4"
Mar 13 21:21:02 crc kubenswrapper[5029]: I0313 21:21:02.741032 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a"
Mar 13 21:21:02 crc kubenswrapper[5029]: E0313 21:21:02.741364 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:21:13 crc kubenswrapper[5029]: I0313 21:21:13.599900 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a"
Mar 13 21:21:13 crc kubenswrapper[5029]: E0313 21:21:13.600868 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:21:24 crc kubenswrapper[5029]: I0313 21:21:24.600707 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a"
Mar 13 21:21:24 crc kubenswrapper[5029]: E0313 21:21:24.601816 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:21:39 crc kubenswrapper[5029]: I0313 21:21:39.600743 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a"
Mar 13 21:21:39 crc kubenswrapper[5029]: E0313 21:21:39.602043 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:21:51 crc kubenswrapper[5029]: I0313 21:21:51.600896 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a"
Mar 13 21:21:51 crc kubenswrapper[5029]: E0313 21:21:51.601672 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.175295 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557282-qm888"]
Mar 13 21:22:00 crc kubenswrapper[5029]: E0313 21:22:00.176345 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f515de46-72f7-4aae-a11d-a7aa2aa3e40a" containerName="oc"
Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.176360 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="f515de46-72f7-4aae-a11d-a7aa2aa3e40a" containerName="oc"
Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.176625 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="f515de46-72f7-4aae-a11d-a7aa2aa3e40a" containerName="oc"
Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.177372 5029 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557282-qm888" Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.179602 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.180352 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.181094 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.192965 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557282-qm888"] Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.245668 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2p7m\" (UniqueName: \"kubernetes.io/projected/7dcee36b-c758-47ab-9e5e-5e8964ea5bdf-kube-api-access-m2p7m\") pod \"auto-csr-approver-29557282-qm888\" (UID: \"7dcee36b-c758-47ab-9e5e-5e8964ea5bdf\") " pod="openshift-infra/auto-csr-approver-29557282-qm888" Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.347649 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2p7m\" (UniqueName: \"kubernetes.io/projected/7dcee36b-c758-47ab-9e5e-5e8964ea5bdf-kube-api-access-m2p7m\") pod \"auto-csr-approver-29557282-qm888\" (UID: \"7dcee36b-c758-47ab-9e5e-5e8964ea5bdf\") " pod="openshift-infra/auto-csr-approver-29557282-qm888" Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.370948 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2p7m\" (UniqueName: \"kubernetes.io/projected/7dcee36b-c758-47ab-9e5e-5e8964ea5bdf-kube-api-access-m2p7m\") pod \"auto-csr-approver-29557282-qm888\" (UID: \"7dcee36b-c758-47ab-9e5e-5e8964ea5bdf\") " 
pod="openshift-infra/auto-csr-approver-29557282-qm888" Mar 13 21:22:00 crc kubenswrapper[5029]: I0313 21:22:00.501450 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557282-qm888" Mar 13 21:22:01 crc kubenswrapper[5029]: I0313 21:22:01.046130 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557282-qm888"] Mar 13 21:22:01 crc kubenswrapper[5029]: I0313 21:22:01.054449 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 21:22:01 crc kubenswrapper[5029]: I0313 21:22:01.344358 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557282-qm888" event={"ID":"7dcee36b-c758-47ab-9e5e-5e8964ea5bdf","Type":"ContainerStarted","Data":"88f20b573610a7669ee9f12f456dcf3cf779013cbed66bc4354ba86d554659f3"} Mar 13 21:22:03 crc kubenswrapper[5029]: I0313 21:22:03.368578 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557282-qm888" event={"ID":"7dcee36b-c758-47ab-9e5e-5e8964ea5bdf","Type":"ContainerStarted","Data":"621f324f272f5c956a96f5b9de6d6216222f319fc2849ac2a1b5107da59c3731"} Mar 13 21:22:03 crc kubenswrapper[5029]: I0313 21:22:03.397467 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557282-qm888" podStartSLOduration=1.408135994 podStartE2EDuration="3.397438451s" podCreationTimestamp="2026-03-13 21:22:00 +0000 UTC" firstStartedPulling="2026-03-13 21:22:01.054254525 +0000 UTC m=+3281.070336928" lastFinishedPulling="2026-03-13 21:22:03.043556982 +0000 UTC m=+3283.059639385" observedRunningTime="2026-03-13 21:22:03.384305221 +0000 UTC m=+3283.400387624" watchObservedRunningTime="2026-03-13 21:22:03.397438451 +0000 UTC m=+3283.413520854" Mar 13 21:22:03 crc kubenswrapper[5029]: I0313 21:22:03.600210 5029 scope.go:117] "RemoveContainer" 
containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:22:03 crc kubenswrapper[5029]: E0313 21:22:03.600560 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:22:04 crc kubenswrapper[5029]: I0313 21:22:04.401045 5029 generic.go:334] "Generic (PLEG): container finished" podID="7dcee36b-c758-47ab-9e5e-5e8964ea5bdf" containerID="621f324f272f5c956a96f5b9de6d6216222f319fc2849ac2a1b5107da59c3731" exitCode=0 Mar 13 21:22:04 crc kubenswrapper[5029]: I0313 21:22:04.401358 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557282-qm888" event={"ID":"7dcee36b-c758-47ab-9e5e-5e8964ea5bdf","Type":"ContainerDied","Data":"621f324f272f5c956a96f5b9de6d6216222f319fc2849ac2a1b5107da59c3731"} Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.091762 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557282-qm888" Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.174358 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2p7m\" (UniqueName: \"kubernetes.io/projected/7dcee36b-c758-47ab-9e5e-5e8964ea5bdf-kube-api-access-m2p7m\") pod \"7dcee36b-c758-47ab-9e5e-5e8964ea5bdf\" (UID: \"7dcee36b-c758-47ab-9e5e-5e8964ea5bdf\") " Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.194126 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dcee36b-c758-47ab-9e5e-5e8964ea5bdf-kube-api-access-m2p7m" (OuterVolumeSpecName: "kube-api-access-m2p7m") pod "7dcee36b-c758-47ab-9e5e-5e8964ea5bdf" (UID: "7dcee36b-c758-47ab-9e5e-5e8964ea5bdf"). InnerVolumeSpecName "kube-api-access-m2p7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.277390 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2p7m\" (UniqueName: \"kubernetes.io/projected/7dcee36b-c758-47ab-9e5e-5e8964ea5bdf-kube-api-access-m2p7m\") on node \"crc\" DevicePath \"\"" Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.422647 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557282-qm888" event={"ID":"7dcee36b-c758-47ab-9e5e-5e8964ea5bdf","Type":"ContainerDied","Data":"88f20b573610a7669ee9f12f456dcf3cf779013cbed66bc4354ba86d554659f3"} Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.422691 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f20b573610a7669ee9f12f456dcf3cf779013cbed66bc4354ba86d554659f3" Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.422751 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557282-qm888" Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.466144 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557276-tnrnc"] Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.476634 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557276-tnrnc"] Mar 13 21:22:06 crc kubenswrapper[5029]: I0313 21:22:06.612496 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c9968b-ac0a-4a0e-a4b7-bf4af5daa391" path="/var/lib/kubelet/pods/93c9968b-ac0a-4a0e-a4b7-bf4af5daa391/volumes" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.643194 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bmc2c"] Mar 13 21:22:09 crc kubenswrapper[5029]: E0313 21:22:09.644503 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dcee36b-c758-47ab-9e5e-5e8964ea5bdf" containerName="oc" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.644523 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dcee36b-c758-47ab-9e5e-5e8964ea5bdf" containerName="oc" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.644747 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dcee36b-c758-47ab-9e5e-5e8964ea5bdf" containerName="oc" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.646418 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.661343 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmc2c"] Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.753526 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ttv\" (UniqueName: \"kubernetes.io/projected/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-kube-api-access-82ttv\") pod \"certified-operators-bmc2c\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.753690 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-catalog-content\") pod \"certified-operators-bmc2c\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.753763 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-utilities\") pod \"certified-operators-bmc2c\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.856151 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ttv\" (UniqueName: \"kubernetes.io/projected/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-kube-api-access-82ttv\") pod \"certified-operators-bmc2c\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.856346 5029 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-catalog-content\") pod \"certified-operators-bmc2c\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.856426 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-utilities\") pod \"certified-operators-bmc2c\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.857118 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-catalog-content\") pod \"certified-operators-bmc2c\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.857187 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-utilities\") pod \"certified-operators-bmc2c\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.887810 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ttv\" (UniqueName: \"kubernetes.io/projected/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-kube-api-access-82ttv\") pod \"certified-operators-bmc2c\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:09 crc kubenswrapper[5029]: I0313 21:22:09.982271 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:10 crc kubenswrapper[5029]: I0313 21:22:10.626102 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmc2c"] Mar 13 21:22:11 crc kubenswrapper[5029]: I0313 21:22:11.476513 5029 generic.go:334] "Generic (PLEG): container finished" podID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerID="65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9" exitCode=0 Mar 13 21:22:11 crc kubenswrapper[5029]: I0313 21:22:11.476780 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmc2c" event={"ID":"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0","Type":"ContainerDied","Data":"65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9"} Mar 13 21:22:11 crc kubenswrapper[5029]: I0313 21:22:11.476809 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmc2c" event={"ID":"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0","Type":"ContainerStarted","Data":"64bfb3668c569a4f6af380d20efdc0543a0139214388ef36715cf207d24b8e9e"} Mar 13 21:22:12 crc kubenswrapper[5029]: I0313 21:22:12.490397 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmc2c" event={"ID":"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0","Type":"ContainerStarted","Data":"8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4"} Mar 13 21:22:13 crc kubenswrapper[5029]: I0313 21:22:13.303808 5029 scope.go:117] "RemoveContainer" containerID="328361642bdd2473263b8f34cc501937de7477713b4bfd47ff33be5d10d70393" Mar 13 21:22:13 crc kubenswrapper[5029]: I0313 21:22:13.354063 5029 scope.go:117] "RemoveContainer" containerID="1bdcf42bb92184b1324a556d18c1a156723ede1d3c9da235d451f98f411fec41" Mar 13 21:22:13 crc kubenswrapper[5029]: I0313 21:22:13.397793 5029 scope.go:117] "RemoveContainer" 
containerID="edd638fe53972b96d25a068511a646093d9929255f77dcfda2bda76c2a47ba15" Mar 13 21:22:13 crc kubenswrapper[5029]: I0313 21:22:13.477720 5029 scope.go:117] "RemoveContainer" containerID="6b291de216ab0535b96e3a0b485b29355ae2e8f2a95ef573737ee9f92ffefb2a" Mar 13 21:22:14 crc kubenswrapper[5029]: I0313 21:22:14.520574 5029 generic.go:334] "Generic (PLEG): container finished" podID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerID="8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4" exitCode=0 Mar 13 21:22:14 crc kubenswrapper[5029]: I0313 21:22:14.520676 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmc2c" event={"ID":"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0","Type":"ContainerDied","Data":"8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4"} Mar 13 21:22:15 crc kubenswrapper[5029]: I0313 21:22:15.535842 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmc2c" event={"ID":"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0","Type":"ContainerStarted","Data":"defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc"} Mar 13 21:22:15 crc kubenswrapper[5029]: I0313 21:22:15.558748 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bmc2c" podStartSLOduration=3.112188542 podStartE2EDuration="6.558728017s" podCreationTimestamp="2026-03-13 21:22:09 +0000 UTC" firstStartedPulling="2026-03-13 21:22:11.492609012 +0000 UTC m=+3291.508691415" lastFinishedPulling="2026-03-13 21:22:14.939148487 +0000 UTC m=+3294.955230890" observedRunningTime="2026-03-13 21:22:15.555867999 +0000 UTC m=+3295.571950412" watchObservedRunningTime="2026-03-13 21:22:15.558728017 +0000 UTC m=+3295.574810420" Mar 13 21:22:17 crc kubenswrapper[5029]: I0313 21:22:17.599611 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:22:17 crc 
kubenswrapper[5029]: E0313 21:22:17.600252 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:22:19 crc kubenswrapper[5029]: I0313 21:22:19.984767 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:19 crc kubenswrapper[5029]: I0313 21:22:19.984818 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:20 crc kubenswrapper[5029]: I0313 21:22:20.037360 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:20 crc kubenswrapper[5029]: I0313 21:22:20.671024 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:20 crc kubenswrapper[5029]: I0313 21:22:20.730816 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmc2c"] Mar 13 21:22:22 crc kubenswrapper[5029]: I0313 21:22:22.625372 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bmc2c" podUID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerName="registry-server" containerID="cri-o://defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc" gracePeriod=2 Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.542432 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.640213 5029 generic.go:334] "Generic (PLEG): container finished" podID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerID="defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc" exitCode=0 Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.640331 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmc2c" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.640348 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmc2c" event={"ID":"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0","Type":"ContainerDied","Data":"defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc"} Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.640753 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmc2c" event={"ID":"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0","Type":"ContainerDied","Data":"64bfb3668c569a4f6af380d20efdc0543a0139214388ef36715cf207d24b8e9e"} Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.640782 5029 scope.go:117] "RemoveContainer" containerID="defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.668419 5029 scope.go:117] "RemoveContainer" containerID="8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.690820 5029 scope.go:117] "RemoveContainer" containerID="65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.710394 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-catalog-content\") pod 
\"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.710745 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-utilities\") pod \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.711130 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82ttv\" (UniqueName: \"kubernetes.io/projected/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-kube-api-access-82ttv\") pod \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\" (UID: \"bc5c52e2-4c52-45a1-aa12-27a1e2effaa0\") " Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.711618 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-utilities" (OuterVolumeSpecName: "utilities") pod "bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" (UID: "bc5c52e2-4c52-45a1-aa12-27a1e2effaa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.714665 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.722060 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-kube-api-access-82ttv" (OuterVolumeSpecName: "kube-api-access-82ttv") pod "bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" (UID: "bc5c52e2-4c52-45a1-aa12-27a1e2effaa0"). InnerVolumeSpecName "kube-api-access-82ttv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.787142 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" (UID: "bc5c52e2-4c52-45a1-aa12-27a1e2effaa0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.801258 5029 scope.go:117] "RemoveContainer" containerID="defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc" Mar 13 21:22:23 crc kubenswrapper[5029]: E0313 21:22:23.802146 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc\": container with ID starting with defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc not found: ID does not exist" containerID="defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.802220 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc"} err="failed to get container status \"defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc\": rpc error: code = NotFound desc = could not find container \"defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc\": container with ID starting with defb47454afc7abb6554588baf12ee6455af842aac30127a2a6a721ccf27d7cc not found: ID does not exist" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.802264 5029 scope.go:117] "RemoveContainer" containerID="8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4" Mar 13 21:22:23 crc kubenswrapper[5029]: E0313 21:22:23.802885 5029 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4\": container with ID starting with 8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4 not found: ID does not exist" containerID="8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.802923 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4"} err="failed to get container status \"8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4\": rpc error: code = NotFound desc = could not find container \"8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4\": container with ID starting with 8b665fcef25fe1a117fe15d8b86ba8200cf7b9cd714e92a835f0a031859300b4 not found: ID does not exist" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.802947 5029 scope.go:117] "RemoveContainer" containerID="65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9" Mar 13 21:22:23 crc kubenswrapper[5029]: E0313 21:22:23.803404 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9\": container with ID starting with 65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9 not found: ID does not exist" containerID="65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.803476 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9"} err="failed to get container status \"65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9\": rpc error: code = NotFound desc = could 
not find container \"65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9\": container with ID starting with 65f9fc517096c90df70578ded8f12f12c9c2d85a4bf1ba5beef665ddf52d86a9 not found: ID does not exist" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.817420 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.817473 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82ttv\" (UniqueName: \"kubernetes.io/projected/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0-kube-api-access-82ttv\") on node \"crc\" DevicePath \"\"" Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.978253 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmc2c"] Mar 13 21:22:23 crc kubenswrapper[5029]: I0313 21:22:23.987831 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bmc2c"] Mar 13 21:22:24 crc kubenswrapper[5029]: I0313 21:22:24.615173 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" path="/var/lib/kubelet/pods/bc5c52e2-4c52-45a1-aa12-27a1e2effaa0/volumes" Mar 13 21:22:28 crc kubenswrapper[5029]: I0313 21:22:28.600878 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:22:28 crc kubenswrapper[5029]: E0313 21:22:28.601776 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" 
podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:22:43 crc kubenswrapper[5029]: I0313 21:22:43.599815 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:22:43 crc kubenswrapper[5029]: E0313 21:22:43.600614 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:22:54 crc kubenswrapper[5029]: I0313 21:22:54.600577 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:22:54 crc kubenswrapper[5029]: E0313 21:22:54.601913 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:23:05 crc kubenswrapper[5029]: I0313 21:23:05.600823 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:23:05 crc kubenswrapper[5029]: E0313 21:23:05.602144 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:23:17 crc kubenswrapper[5029]: I0313 21:23:17.599658 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:23:17 crc kubenswrapper[5029]: E0313 21:23:17.600629 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:23:32 crc kubenswrapper[5029]: I0313 21:23:32.600135 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:23:32 crc kubenswrapper[5029]: E0313 21:23:32.601370 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:23:44 crc kubenswrapper[5029]: I0313 21:23:44.600697 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:23:44 crc kubenswrapper[5029]: E0313 21:23:44.602458 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:23:57 crc kubenswrapper[5029]: I0313 21:23:57.599991 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:23:57 crc kubenswrapper[5029]: E0313 21:23:57.601336 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.152242 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557284-w9ptx"] Mar 13 21:24:00 crc kubenswrapper[5029]: E0313 21:24:00.154107 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerName="registry-server" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.154131 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerName="registry-server" Mar 13 21:24:00 crc kubenswrapper[5029]: E0313 21:24:00.154184 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerName="extract-content" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.154195 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerName="extract-content" Mar 13 21:24:00 crc kubenswrapper[5029]: E0313 21:24:00.154232 5029 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerName="extract-utilities" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.154245 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerName="extract-utilities" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.154515 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc5c52e2-4c52-45a1-aa12-27a1e2effaa0" containerName="registry-server" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.155616 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557284-w9ptx" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.158115 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.159143 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.159162 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.164747 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557284-w9ptx"] Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.310835 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mk6k\" (UniqueName: \"kubernetes.io/projected/306bd8a5-0ed2-4533-98db-9b69bfed7710-kube-api-access-7mk6k\") pod \"auto-csr-approver-29557284-w9ptx\" (UID: \"306bd8a5-0ed2-4533-98db-9b69bfed7710\") " pod="openshift-infra/auto-csr-approver-29557284-w9ptx" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.414224 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mk6k\" (UniqueName: 
\"kubernetes.io/projected/306bd8a5-0ed2-4533-98db-9b69bfed7710-kube-api-access-7mk6k\") pod \"auto-csr-approver-29557284-w9ptx\" (UID: \"306bd8a5-0ed2-4533-98db-9b69bfed7710\") " pod="openshift-infra/auto-csr-approver-29557284-w9ptx" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.438189 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mk6k\" (UniqueName: \"kubernetes.io/projected/306bd8a5-0ed2-4533-98db-9b69bfed7710-kube-api-access-7mk6k\") pod \"auto-csr-approver-29557284-w9ptx\" (UID: \"306bd8a5-0ed2-4533-98db-9b69bfed7710\") " pod="openshift-infra/auto-csr-approver-29557284-w9ptx" Mar 13 21:24:00 crc kubenswrapper[5029]: I0313 21:24:00.476838 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557284-w9ptx" Mar 13 21:24:01 crc kubenswrapper[5029]: I0313 21:24:01.034479 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557284-w9ptx"] Mar 13 21:24:01 crc kubenswrapper[5029]: I0313 21:24:01.587796 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557284-w9ptx" event={"ID":"306bd8a5-0ed2-4533-98db-9b69bfed7710","Type":"ContainerStarted","Data":"85fe27437c73252f5645b26437fcfc377de3a199e1e84b9d5459c5bdf4ad45b8"} Mar 13 21:24:03 crc kubenswrapper[5029]: I0313 21:24:03.612061 5029 generic.go:334] "Generic (PLEG): container finished" podID="306bd8a5-0ed2-4533-98db-9b69bfed7710" containerID="8c29d78cc650b5ae4a45123b48f1d7f31c44783669965a1702475ea08048fcf5" exitCode=0 Mar 13 21:24:03 crc kubenswrapper[5029]: I0313 21:24:03.612149 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557284-w9ptx" event={"ID":"306bd8a5-0ed2-4533-98db-9b69bfed7710","Type":"ContainerDied","Data":"8c29d78cc650b5ae4a45123b48f1d7f31c44783669965a1702475ea08048fcf5"} Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.281257 5029 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pkbkr"] Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.324834 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkbkr"] Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.327284 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.345196 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557284-w9ptx" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.436236 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mk6k\" (UniqueName: \"kubernetes.io/projected/306bd8a5-0ed2-4533-98db-9b69bfed7710-kube-api-access-7mk6k\") pod \"306bd8a5-0ed2-4533-98db-9b69bfed7710\" (UID: \"306bd8a5-0ed2-4533-98db-9b69bfed7710\") " Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.437402 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2wwj\" (UniqueName: \"kubernetes.io/projected/de640860-833e-4e83-8069-a0b0e2096fc0-kube-api-access-n2wwj\") pod \"redhat-operators-pkbkr\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.438090 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-utilities\") pod \"redhat-operators-pkbkr\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.438473 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-catalog-content\") pod \"redhat-operators-pkbkr\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.457295 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306bd8a5-0ed2-4533-98db-9b69bfed7710-kube-api-access-7mk6k" (OuterVolumeSpecName: "kube-api-access-7mk6k") pod "306bd8a5-0ed2-4533-98db-9b69bfed7710" (UID: "306bd8a5-0ed2-4533-98db-9b69bfed7710"). InnerVolumeSpecName "kube-api-access-7mk6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.540898 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-catalog-content\") pod \"redhat-operators-pkbkr\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.541032 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2wwj\" (UniqueName: \"kubernetes.io/projected/de640860-833e-4e83-8069-a0b0e2096fc0-kube-api-access-n2wwj\") pod \"redhat-operators-pkbkr\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.541088 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-utilities\") pod \"redhat-operators-pkbkr\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.541273 5029 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-7mk6k\" (UniqueName: \"kubernetes.io/projected/306bd8a5-0ed2-4533-98db-9b69bfed7710-kube-api-access-7mk6k\") on node \"crc\" DevicePath \"\"" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.542018 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-catalog-content\") pod \"redhat-operators-pkbkr\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.542286 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-utilities\") pod \"redhat-operators-pkbkr\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.562955 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2wwj\" (UniqueName: \"kubernetes.io/projected/de640860-833e-4e83-8069-a0b0e2096fc0-kube-api-access-n2wwj\") pod \"redhat-operators-pkbkr\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.641652 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557284-w9ptx" event={"ID":"306bd8a5-0ed2-4533-98db-9b69bfed7710","Type":"ContainerDied","Data":"85fe27437c73252f5645b26437fcfc377de3a199e1e84b9d5459c5bdf4ad45b8"} Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.641715 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85fe27437c73252f5645b26437fcfc377de3a199e1e84b9d5459c5bdf4ad45b8" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.642240 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557284-w9ptx" Mar 13 21:24:05 crc kubenswrapper[5029]: I0313 21:24:05.673374 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:06 crc kubenswrapper[5029]: I0313 21:24:06.231149 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkbkr"] Mar 13 21:24:06 crc kubenswrapper[5029]: W0313 21:24:06.236963 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde640860_833e_4e83_8069_a0b0e2096fc0.slice/crio-cdad0dc6f5ce658402f7bcec91c09a5fd527c32a34ebe7e1615b1ef6470eb782 WatchSource:0}: Error finding container cdad0dc6f5ce658402f7bcec91c09a5fd527c32a34ebe7e1615b1ef6470eb782: Status 404 returned error can't find the container with id cdad0dc6f5ce658402f7bcec91c09a5fd527c32a34ebe7e1615b1ef6470eb782 Mar 13 21:24:06 crc kubenswrapper[5029]: I0313 21:24:06.444934 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557278-6qsgh"] Mar 13 21:24:06 crc kubenswrapper[5029]: I0313 21:24:06.457698 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557278-6qsgh"] Mar 13 21:24:06 crc kubenswrapper[5029]: I0313 21:24:06.614636 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c11dea9-301a-41b7-a829-36cbdd1e7158" path="/var/lib/kubelet/pods/1c11dea9-301a-41b7-a829-36cbdd1e7158/volumes" Mar 13 21:24:06 crc kubenswrapper[5029]: I0313 21:24:06.653285 5029 generic.go:334] "Generic (PLEG): container finished" podID="de640860-833e-4e83-8069-a0b0e2096fc0" containerID="e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43" exitCode=0 Mar 13 21:24:06 crc kubenswrapper[5029]: I0313 21:24:06.653347 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pkbkr" event={"ID":"de640860-833e-4e83-8069-a0b0e2096fc0","Type":"ContainerDied","Data":"e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43"} Mar 13 21:24:06 crc kubenswrapper[5029]: I0313 21:24:06.653380 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkbkr" event={"ID":"de640860-833e-4e83-8069-a0b0e2096fc0","Type":"ContainerStarted","Data":"cdad0dc6f5ce658402f7bcec91c09a5fd527c32a34ebe7e1615b1ef6470eb782"} Mar 13 21:24:07 crc kubenswrapper[5029]: I0313 21:24:07.664268 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkbkr" event={"ID":"de640860-833e-4e83-8069-a0b0e2096fc0","Type":"ContainerStarted","Data":"dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538"} Mar 13 21:24:10 crc kubenswrapper[5029]: I0313 21:24:10.610012 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:24:10 crc kubenswrapper[5029]: E0313 21:24:10.611273 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:24:13 crc kubenswrapper[5029]: I0313 21:24:13.619731 5029 scope.go:117] "RemoveContainer" containerID="96ef8784deff088c39b0cd00412c5f6b75bab7abec4302cfedb16e6dafc1e6ab" Mar 13 21:24:13 crc kubenswrapper[5029]: I0313 21:24:13.749530 5029 generic.go:334] "Generic (PLEG): container finished" podID="de640860-833e-4e83-8069-a0b0e2096fc0" containerID="dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538" exitCode=0 Mar 13 21:24:13 crc kubenswrapper[5029]: I0313 
21:24:13.749581 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkbkr" event={"ID":"de640860-833e-4e83-8069-a0b0e2096fc0","Type":"ContainerDied","Data":"dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538"} Mar 13 21:24:14 crc kubenswrapper[5029]: I0313 21:24:14.761419 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkbkr" event={"ID":"de640860-833e-4e83-8069-a0b0e2096fc0","Type":"ContainerStarted","Data":"bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30"} Mar 13 21:24:14 crc kubenswrapper[5029]: I0313 21:24:14.792333 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pkbkr" podStartSLOduration=2.274050798 podStartE2EDuration="9.792304058s" podCreationTimestamp="2026-03-13 21:24:05 +0000 UTC" firstStartedPulling="2026-03-13 21:24:06.656556946 +0000 UTC m=+3406.672639349" lastFinishedPulling="2026-03-13 21:24:14.174810206 +0000 UTC m=+3414.190892609" observedRunningTime="2026-03-13 21:24:14.780460004 +0000 UTC m=+3414.796542417" watchObservedRunningTime="2026-03-13 21:24:14.792304058 +0000 UTC m=+3414.808386461" Mar 13 21:24:15 crc kubenswrapper[5029]: I0313 21:24:15.673410 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:15 crc kubenswrapper[5029]: I0313 21:24:15.673882 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:16 crc kubenswrapper[5029]: I0313 21:24:16.722535 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pkbkr" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" containerName="registry-server" probeResult="failure" output=< Mar 13 21:24:16 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s Mar 13 21:24:16 crc 
kubenswrapper[5029]: > Mar 13 21:24:23 crc kubenswrapper[5029]: I0313 21:24:23.600122 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:24:23 crc kubenswrapper[5029]: E0313 21:24:23.600985 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:24:25 crc kubenswrapper[5029]: I0313 21:24:25.728967 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:25 crc kubenswrapper[5029]: I0313 21:24:25.807956 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:26 crc kubenswrapper[5029]: I0313 21:24:26.006948 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkbkr"] Mar 13 21:24:26 crc kubenswrapper[5029]: I0313 21:24:26.891066 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pkbkr" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" containerName="registry-server" containerID="cri-o://bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30" gracePeriod=2 Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.687991 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.815467 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2wwj\" (UniqueName: \"kubernetes.io/projected/de640860-833e-4e83-8069-a0b0e2096fc0-kube-api-access-n2wwj\") pod \"de640860-833e-4e83-8069-a0b0e2096fc0\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.815811 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-utilities\") pod \"de640860-833e-4e83-8069-a0b0e2096fc0\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.815962 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-catalog-content\") pod \"de640860-833e-4e83-8069-a0b0e2096fc0\" (UID: \"de640860-833e-4e83-8069-a0b0e2096fc0\") " Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.817288 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-utilities" (OuterVolumeSpecName: "utilities") pod "de640860-833e-4e83-8069-a0b0e2096fc0" (UID: "de640860-833e-4e83-8069-a0b0e2096fc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.824960 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de640860-833e-4e83-8069-a0b0e2096fc0-kube-api-access-n2wwj" (OuterVolumeSpecName: "kube-api-access-n2wwj") pod "de640860-833e-4e83-8069-a0b0e2096fc0" (UID: "de640860-833e-4e83-8069-a0b0e2096fc0"). InnerVolumeSpecName "kube-api-access-n2wwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.905026 5029 generic.go:334] "Generic (PLEG): container finished" podID="de640860-833e-4e83-8069-a0b0e2096fc0" containerID="bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30" exitCode=0 Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.905090 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkbkr" event={"ID":"de640860-833e-4e83-8069-a0b0e2096fc0","Type":"ContainerDied","Data":"bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30"} Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.905157 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkbkr" event={"ID":"de640860-833e-4e83-8069-a0b0e2096fc0","Type":"ContainerDied","Data":"cdad0dc6f5ce658402f7bcec91c09a5fd527c32a34ebe7e1615b1ef6470eb782"} Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.905158 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkbkr" Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.905185 5029 scope.go:117] "RemoveContainer" containerID="bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30" Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.919391 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2wwj\" (UniqueName: \"kubernetes.io/projected/de640860-833e-4e83-8069-a0b0e2096fc0-kube-api-access-n2wwj\") on node \"crc\" DevicePath \"\"" Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.919438 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.933000 5029 scope.go:117] "RemoveContainer" containerID="dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538" Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.963425 5029 scope.go:117] "RemoveContainer" containerID="e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43" Mar 13 21:24:27 crc kubenswrapper[5029]: I0313 21:24:27.973588 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de640860-833e-4e83-8069-a0b0e2096fc0" (UID: "de640860-833e-4e83-8069-a0b0e2096fc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.017059 5029 scope.go:117] "RemoveContainer" containerID="bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30" Mar 13 21:24:28 crc kubenswrapper[5029]: E0313 21:24:28.017678 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30\": container with ID starting with bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30 not found: ID does not exist" containerID="bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30" Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.017745 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30"} err="failed to get container status \"bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30\": rpc error: code = NotFound desc = could not find container \"bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30\": container with ID starting with bb8de9de7c8b744614a85d0ca79a2ffca23695de36098f3c761a2d7844f1bc30 not found: ID does not exist" Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.017777 5029 scope.go:117] "RemoveContainer" containerID="dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538" Mar 13 21:24:28 crc kubenswrapper[5029]: E0313 21:24:28.018395 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538\": container with ID starting with dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538 not found: ID does not exist" containerID="dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538" Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.018445 
5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538"} err="failed to get container status \"dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538\": rpc error: code = NotFound desc = could not find container \"dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538\": container with ID starting with dc35da38c63409922fc3d1935d2f19d02c4d9459d8013e1dc535efc9c60b5538 not found: ID does not exist" Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.018478 5029 scope.go:117] "RemoveContainer" containerID="e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43" Mar 13 21:24:28 crc kubenswrapper[5029]: E0313 21:24:28.019122 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43\": container with ID starting with e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43 not found: ID does not exist" containerID="e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43" Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.019162 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43"} err="failed to get container status \"e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43\": rpc error: code = NotFound desc = could not find container \"e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43\": container with ID starting with e40545fd0cfb55505fb78f47ced2b66995be7a6e724b051e11874e28f00f2e43 not found: ID does not exist" Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.021907 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/de640860-833e-4e83-8069-a0b0e2096fc0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.244774 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkbkr"] Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.256951 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pkbkr"] Mar 13 21:24:28 crc kubenswrapper[5029]: I0313 21:24:28.611437 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" path="/var/lib/kubelet/pods/de640860-833e-4e83-8069-a0b0e2096fc0/volumes" Mar 13 21:24:34 crc kubenswrapper[5029]: I0313 21:24:34.600566 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:24:34 crc kubenswrapper[5029]: E0313 21:24:34.601942 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:24:48 crc kubenswrapper[5029]: I0313 21:24:48.601712 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:24:48 crc kubenswrapper[5029]: E0313 21:24:48.602947 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:25:02 crc kubenswrapper[5029]: I0313 21:25:02.599762 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:25:02 crc kubenswrapper[5029]: E0313 21:25:02.600769 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:25:13 crc kubenswrapper[5029]: I0313 21:25:13.599765 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:25:13 crc kubenswrapper[5029]: E0313 21:25:13.600601 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:25:24 crc kubenswrapper[5029]: I0313 21:25:24.599952 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:25:24 crc kubenswrapper[5029]: E0313 21:25:24.600848 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:25:35 crc kubenswrapper[5029]: I0313 21:25:35.599669 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:25:35 crc kubenswrapper[5029]: E0313 21:25:35.600610 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:25:49 crc kubenswrapper[5029]: I0313 21:25:49.600510 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:25:49 crc kubenswrapper[5029]: E0313 21:25:49.601647 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.155908 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557286-wqb9q"] Mar 13 21:26:00 crc kubenswrapper[5029]: E0313 21:26:00.157188 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306bd8a5-0ed2-4533-98db-9b69bfed7710" containerName="oc" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.157211 5029 
state_mem.go:107] "Deleted CPUSet assignment" podUID="306bd8a5-0ed2-4533-98db-9b69bfed7710" containerName="oc" Mar 13 21:26:00 crc kubenswrapper[5029]: E0313 21:26:00.157229 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" containerName="extract-content" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.157237 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" containerName="extract-content" Mar 13 21:26:00 crc kubenswrapper[5029]: E0313 21:26:00.157270 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" containerName="extract-utilities" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.157280 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" containerName="extract-utilities" Mar 13 21:26:00 crc kubenswrapper[5029]: E0313 21:26:00.157294 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" containerName="registry-server" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.157303 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" containerName="registry-server" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.157514 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="306bd8a5-0ed2-4533-98db-9b69bfed7710" containerName="oc" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.157544 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="de640860-833e-4e83-8069-a0b0e2096fc0" containerName="registry-server" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.158416 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557286-wqb9q" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.160901 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.161561 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.166058 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.169585 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557286-wqb9q"] Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.206026 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrk6k\" (UniqueName: \"kubernetes.io/projected/abee0019-846f-4623-8e18-bafad404fb33-kube-api-access-mrk6k\") pod \"auto-csr-approver-29557286-wqb9q\" (UID: \"abee0019-846f-4623-8e18-bafad404fb33\") " pod="openshift-infra/auto-csr-approver-29557286-wqb9q" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.308297 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrk6k\" (UniqueName: \"kubernetes.io/projected/abee0019-846f-4623-8e18-bafad404fb33-kube-api-access-mrk6k\") pod \"auto-csr-approver-29557286-wqb9q\" (UID: \"abee0019-846f-4623-8e18-bafad404fb33\") " pod="openshift-infra/auto-csr-approver-29557286-wqb9q" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.339210 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrk6k\" (UniqueName: \"kubernetes.io/projected/abee0019-846f-4623-8e18-bafad404fb33-kube-api-access-mrk6k\") pod \"auto-csr-approver-29557286-wqb9q\" (UID: \"abee0019-846f-4623-8e18-bafad404fb33\") " 
pod="openshift-infra/auto-csr-approver-29557286-wqb9q" Mar 13 21:26:00 crc kubenswrapper[5029]: I0313 21:26:00.479946 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557286-wqb9q" Mar 13 21:26:01 crc kubenswrapper[5029]: I0313 21:26:01.024700 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557286-wqb9q"] Mar 13 21:26:01 crc kubenswrapper[5029]: I0313 21:26:01.600183 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:26:01 crc kubenswrapper[5029]: E0313 21:26:01.600733 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:26:01 crc kubenswrapper[5029]: I0313 21:26:01.926458 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557286-wqb9q" event={"ID":"abee0019-846f-4623-8e18-bafad404fb33","Type":"ContainerStarted","Data":"0c413e0e0b56d425f0ecf5122db53b25efdb0288572f412891cd7344d8c48b63"} Mar 13 21:26:02 crc kubenswrapper[5029]: I0313 21:26:02.938028 5029 generic.go:334] "Generic (PLEG): container finished" podID="abee0019-846f-4623-8e18-bafad404fb33" containerID="1611f3f281832004d5bd5ccaee512e8349701d201977979a47e7096f8a31814d" exitCode=0 Mar 13 21:26:02 crc kubenswrapper[5029]: I0313 21:26:02.938151 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557286-wqb9q" event={"ID":"abee0019-846f-4623-8e18-bafad404fb33","Type":"ContainerDied","Data":"1611f3f281832004d5bd5ccaee512e8349701d201977979a47e7096f8a31814d"} 
Mar 13 21:26:04 crc kubenswrapper[5029]: I0313 21:26:04.533053 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557286-wqb9q" Mar 13 21:26:04 crc kubenswrapper[5029]: I0313 21:26:04.627908 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrk6k\" (UniqueName: \"kubernetes.io/projected/abee0019-846f-4623-8e18-bafad404fb33-kube-api-access-mrk6k\") pod \"abee0019-846f-4623-8e18-bafad404fb33\" (UID: \"abee0019-846f-4623-8e18-bafad404fb33\") " Mar 13 21:26:04 crc kubenswrapper[5029]: I0313 21:26:04.635994 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abee0019-846f-4623-8e18-bafad404fb33-kube-api-access-mrk6k" (OuterVolumeSpecName: "kube-api-access-mrk6k") pod "abee0019-846f-4623-8e18-bafad404fb33" (UID: "abee0019-846f-4623-8e18-bafad404fb33"). InnerVolumeSpecName "kube-api-access-mrk6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:26:04 crc kubenswrapper[5029]: I0313 21:26:04.731630 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrk6k\" (UniqueName: \"kubernetes.io/projected/abee0019-846f-4623-8e18-bafad404fb33-kube-api-access-mrk6k\") on node \"crc\" DevicePath \"\"" Mar 13 21:26:04 crc kubenswrapper[5029]: I0313 21:26:04.960503 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557286-wqb9q" event={"ID":"abee0019-846f-4623-8e18-bafad404fb33","Type":"ContainerDied","Data":"0c413e0e0b56d425f0ecf5122db53b25efdb0288572f412891cd7344d8c48b63"} Mar 13 21:26:04 crc kubenswrapper[5029]: I0313 21:26:04.960552 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c413e0e0b56d425f0ecf5122db53b25efdb0288572f412891cd7344d8c48b63" Mar 13 21:26:04 crc kubenswrapper[5029]: I0313 21:26:04.960596 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557286-wqb9q" Mar 13 21:26:05 crc kubenswrapper[5029]: I0313 21:26:05.624359 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557280-t7jqq"] Mar 13 21:26:05 crc kubenswrapper[5029]: I0313 21:26:05.635289 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557280-t7jqq"] Mar 13 21:26:06 crc kubenswrapper[5029]: I0313 21:26:06.613012 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f515de46-72f7-4aae-a11d-a7aa2aa3e40a" path="/var/lib/kubelet/pods/f515de46-72f7-4aae-a11d-a7aa2aa3e40a/volumes" Mar 13 21:26:13 crc kubenswrapper[5029]: I0313 21:26:13.793285 5029 scope.go:117] "RemoveContainer" containerID="e84cb40862f3a514da164f491b74683a39d46f9cde544c41ffbdb9228900f3c1" Mar 13 21:26:16 crc kubenswrapper[5029]: I0313 21:26:16.600510 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a" Mar 13 21:26:17 crc kubenswrapper[5029]: I0313 21:26:17.090674 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"98f20ab9a14ee8f5298136685ff74193f1b99413c23d8f965910713c88de0c7f"} Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.794951 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rftp4"] Mar 13 21:26:18 crc kubenswrapper[5029]: E0313 21:26:18.796451 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abee0019-846f-4623-8e18-bafad404fb33" containerName="oc" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.796471 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="abee0019-846f-4623-8e18-bafad404fb33" containerName="oc" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.796717 5029 
memory_manager.go:354] "RemoveStaleState removing state" podUID="abee0019-846f-4623-8e18-bafad404fb33" containerName="oc" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.801331 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.815933 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rftp4"] Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.858053 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e88689-9871-4cd2-9d9e-23b3487a7957-catalog-content\") pod \"community-operators-rftp4\" (UID: \"b4e88689-9871-4cd2-9d9e-23b3487a7957\") " pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.858239 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e88689-9871-4cd2-9d9e-23b3487a7957-utilities\") pod \"community-operators-rftp4\" (UID: \"b4e88689-9871-4cd2-9d9e-23b3487a7957\") " pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.858332 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcncr\" (UniqueName: \"kubernetes.io/projected/b4e88689-9871-4cd2-9d9e-23b3487a7957-kube-api-access-zcncr\") pod \"community-operators-rftp4\" (UID: \"b4e88689-9871-4cd2-9d9e-23b3487a7957\") " pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.960417 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e88689-9871-4cd2-9d9e-23b3487a7957-utilities\") pod 
\"community-operators-rftp4\" (UID: \"b4e88689-9871-4cd2-9d9e-23b3487a7957\") " pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.960511 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcncr\" (UniqueName: \"kubernetes.io/projected/b4e88689-9871-4cd2-9d9e-23b3487a7957-kube-api-access-zcncr\") pod \"community-operators-rftp4\" (UID: \"b4e88689-9871-4cd2-9d9e-23b3487a7957\") " pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.960568 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e88689-9871-4cd2-9d9e-23b3487a7957-catalog-content\") pod \"community-operators-rftp4\" (UID: \"b4e88689-9871-4cd2-9d9e-23b3487a7957\") " pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.961045 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e88689-9871-4cd2-9d9e-23b3487a7957-catalog-content\") pod \"community-operators-rftp4\" (UID: \"b4e88689-9871-4cd2-9d9e-23b3487a7957\") " pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.961275 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e88689-9871-4cd2-9d9e-23b3487a7957-utilities\") pod \"community-operators-rftp4\" (UID: \"b4e88689-9871-4cd2-9d9e-23b3487a7957\") " pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:18 crc kubenswrapper[5029]: I0313 21:26:18.983374 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcncr\" (UniqueName: \"kubernetes.io/projected/b4e88689-9871-4cd2-9d9e-23b3487a7957-kube-api-access-zcncr\") pod 
\"community-operators-rftp4\" (UID: \"b4e88689-9871-4cd2-9d9e-23b3487a7957\") " pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:19 crc kubenswrapper[5029]: I0313 21:26:19.128161 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:19 crc kubenswrapper[5029]: I0313 21:26:19.783458 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rftp4"] Mar 13 21:26:20 crc kubenswrapper[5029]: I0313 21:26:20.144295 5029 generic.go:334] "Generic (PLEG): container finished" podID="b4e88689-9871-4cd2-9d9e-23b3487a7957" containerID="42e64a04d35bbd4317a4ea5fc69d28b538711ac925da24cedd7440468cfc9725" exitCode=0 Mar 13 21:26:20 crc kubenswrapper[5029]: I0313 21:26:20.144671 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rftp4" event={"ID":"b4e88689-9871-4cd2-9d9e-23b3487a7957","Type":"ContainerDied","Data":"42e64a04d35bbd4317a4ea5fc69d28b538711ac925da24cedd7440468cfc9725"} Mar 13 21:26:20 crc kubenswrapper[5029]: I0313 21:26:20.144713 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rftp4" event={"ID":"b4e88689-9871-4cd2-9d9e-23b3487a7957","Type":"ContainerStarted","Data":"d8c4fba65b5117bb6abd83d0eafc1307c27a81d139ca36595c062a2013a785fd"} Mar 13 21:26:26 crc kubenswrapper[5029]: I0313 21:26:26.224085 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rftp4" event={"ID":"b4e88689-9871-4cd2-9d9e-23b3487a7957","Type":"ContainerStarted","Data":"c702e60f9cd0eed7b22b0d7c520f3358ac520ea238de7a056b6f97dae691a5f7"} Mar 13 21:26:27 crc kubenswrapper[5029]: I0313 21:26:27.236469 5029 generic.go:334] "Generic (PLEG): container finished" podID="b4e88689-9871-4cd2-9d9e-23b3487a7957" containerID="c702e60f9cd0eed7b22b0d7c520f3358ac520ea238de7a056b6f97dae691a5f7" exitCode=0 Mar 13 21:26:27 
crc kubenswrapper[5029]: I0313 21:26:27.236882 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rftp4" event={"ID":"b4e88689-9871-4cd2-9d9e-23b3487a7957","Type":"ContainerDied","Data":"c702e60f9cd0eed7b22b0d7c520f3358ac520ea238de7a056b6f97dae691a5f7"} Mar 13 21:26:28 crc kubenswrapper[5029]: I0313 21:26:28.250004 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rftp4" event={"ID":"b4e88689-9871-4cd2-9d9e-23b3487a7957","Type":"ContainerStarted","Data":"f4a40622024c961b3610415b4f68e14a348a96591fd277cc588487c27c720775"} Mar 13 21:26:28 crc kubenswrapper[5029]: I0313 21:26:28.270843 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rftp4" podStartSLOduration=2.761448252 podStartE2EDuration="10.27082197s" podCreationTimestamp="2026-03-13 21:26:18 +0000 UTC" firstStartedPulling="2026-03-13 21:26:20.146321736 +0000 UTC m=+3540.162404139" lastFinishedPulling="2026-03-13 21:26:27.655695454 +0000 UTC m=+3547.671777857" observedRunningTime="2026-03-13 21:26:28.268945269 +0000 UTC m=+3548.285027692" watchObservedRunningTime="2026-03-13 21:26:28.27082197 +0000 UTC m=+3548.286904373" Mar 13 21:26:29 crc kubenswrapper[5029]: I0313 21:26:29.128697 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:29 crc kubenswrapper[5029]: I0313 21:26:29.129166 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:30 crc kubenswrapper[5029]: I0313 21:26:30.182921 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rftp4" podUID="b4e88689-9871-4cd2-9d9e-23b3487a7957" containerName="registry-server" probeResult="failure" output=< Mar 13 21:26:30 crc kubenswrapper[5029]: timeout: failed to connect service 
":50051" within 1s Mar 13 21:26:30 crc kubenswrapper[5029]: > Mar 13 21:26:39 crc kubenswrapper[5029]: I0313 21:26:39.182793 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:39 crc kubenswrapper[5029]: I0313 21:26:39.243469 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rftp4" Mar 13 21:26:39 crc kubenswrapper[5029]: I0313 21:26:39.363765 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rftp4"] Mar 13 21:26:39 crc kubenswrapper[5029]: I0313 21:26:39.434495 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4ssbt"] Mar 13 21:26:39 crc kubenswrapper[5029]: I0313 21:26:39.434751 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4ssbt" podUID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerName="registry-server" containerID="cri-o://f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23" gracePeriod=2 Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.297074 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4ssbt" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.400300 5029 generic.go:334] "Generic (PLEG): container finished" podID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerID="f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23" exitCode=0 Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.400545 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ssbt" event={"ID":"d5874fee-3658-412b-95c2-0cbdf9da9799","Type":"ContainerDied","Data":"f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23"} Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.400681 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ssbt" event={"ID":"d5874fee-3658-412b-95c2-0cbdf9da9799","Type":"ContainerDied","Data":"17d5fb7d0b77f6a885d7a8643b18e63ba6f5084adbf327fe3ec33b7de06e6bfc"} Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.400712 5029 scope.go:117] "RemoveContainer" containerID="f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.401214 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4ssbt" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.430797 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tkfx\" (UniqueName: \"kubernetes.io/projected/d5874fee-3658-412b-95c2-0cbdf9da9799-kube-api-access-7tkfx\") pod \"d5874fee-3658-412b-95c2-0cbdf9da9799\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.431328 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-catalog-content\") pod \"d5874fee-3658-412b-95c2-0cbdf9da9799\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.431559 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-utilities\") pod \"d5874fee-3658-412b-95c2-0cbdf9da9799\" (UID: \"d5874fee-3658-412b-95c2-0cbdf9da9799\") " Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.436792 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-utilities" (OuterVolumeSpecName: "utilities") pod "d5874fee-3658-412b-95c2-0cbdf9da9799" (UID: "d5874fee-3658-412b-95c2-0cbdf9da9799"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.446467 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5874fee-3658-412b-95c2-0cbdf9da9799-kube-api-access-7tkfx" (OuterVolumeSpecName: "kube-api-access-7tkfx") pod "d5874fee-3658-412b-95c2-0cbdf9da9799" (UID: "d5874fee-3658-412b-95c2-0cbdf9da9799"). InnerVolumeSpecName "kube-api-access-7tkfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.473054 5029 scope.go:117] "RemoveContainer" containerID="2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.536690 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tkfx\" (UniqueName: \"kubernetes.io/projected/d5874fee-3658-412b-95c2-0cbdf9da9799-kube-api-access-7tkfx\") on node \"crc\" DevicePath \"\"" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.536720 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.568360 5029 scope.go:117] "RemoveContainer" containerID="0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.612234 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5874fee-3658-412b-95c2-0cbdf9da9799" (UID: "d5874fee-3658-412b-95c2-0cbdf9da9799"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.624237 5029 scope.go:117] "RemoveContainer" containerID="f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23" Mar 13 21:26:40 crc kubenswrapper[5029]: E0313 21:26:40.625463 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23\": container with ID starting with f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23 not found: ID does not exist" containerID="f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.625503 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23"} err="failed to get container status \"f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23\": rpc error: code = NotFound desc = could not find container \"f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23\": container with ID starting with f821114da276d418f2c3b9389eee8031e69155c3aaa033878d90e1aba061bd23 not found: ID does not exist" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.625535 5029 scope.go:117] "RemoveContainer" containerID="2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc" Mar 13 21:26:40 crc kubenswrapper[5029]: E0313 21:26:40.626221 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc\": container with ID starting with 2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc not found: ID does not exist" containerID="2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.626256 
5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc"} err="failed to get container status \"2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc\": rpc error: code = NotFound desc = could not find container \"2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc\": container with ID starting with 2d6bf4ed9678fc8bb8772ad02357c7abb143e39d4bcb0b99a67b49a674f9c4fc not found: ID does not exist" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.626276 5029 scope.go:117] "RemoveContainer" containerID="0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b" Mar 13 21:26:40 crc kubenswrapper[5029]: E0313 21:26:40.626693 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b\": container with ID starting with 0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b not found: ID does not exist" containerID="0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.626724 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b"} err="failed to get container status \"0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b\": rpc error: code = NotFound desc = could not find container \"0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b\": container with ID starting with 0d281b6c9b65d0f833e21150c73d6bc0805d5a428ab6a08879d822741b7fe79b not found: ID does not exist" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.639210 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d5874fee-3658-412b-95c2-0cbdf9da9799-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.743376 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4ssbt"] Mar 13 21:26:40 crc kubenswrapper[5029]: I0313 21:26:40.760098 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4ssbt"] Mar 13 21:26:42 crc kubenswrapper[5029]: I0313 21:26:42.612820 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5874fee-3658-412b-95c2-0cbdf9da9799" path="/var/lib/kubelet/pods/d5874fee-3658-412b-95c2-0cbdf9da9799/volumes" Mar 13 21:26:47 crc kubenswrapper[5029]: I0313 21:26:47.868477 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k86dh"] Mar 13 21:26:47 crc kubenswrapper[5029]: E0313 21:26:47.870330 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerName="extract-content" Mar 13 21:26:47 crc kubenswrapper[5029]: I0313 21:26:47.870349 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerName="extract-content" Mar 13 21:26:47 crc kubenswrapper[5029]: E0313 21:26:47.870360 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerName="extract-utilities" Mar 13 21:26:47 crc kubenswrapper[5029]: I0313 21:26:47.870368 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerName="extract-utilities" Mar 13 21:26:47 crc kubenswrapper[5029]: E0313 21:26:47.870389 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerName="registry-server" Mar 13 21:26:47 crc kubenswrapper[5029]: I0313 21:26:47.870398 5029 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerName="registry-server" Mar 13 21:26:47 crc kubenswrapper[5029]: I0313 21:26:47.872550 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5874fee-3658-412b-95c2-0cbdf9da9799" containerName="registry-server" Mar 13 21:26:47 crc kubenswrapper[5029]: I0313 21:26:47.875665 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:47 crc kubenswrapper[5029]: I0313 21:26:47.887050 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k86dh"] Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.018977 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-utilities\") pod \"redhat-marketplace-k86dh\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.019114 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-catalog-content\") pod \"redhat-marketplace-k86dh\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.019226 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8tdh\" (UniqueName: \"kubernetes.io/projected/4ea33e83-b688-45ab-9e06-565d288e628d-kube-api-access-w8tdh\") pod \"redhat-marketplace-k86dh\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.120798 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-utilities\") pod \"redhat-marketplace-k86dh\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.121188 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-catalog-content\") pod \"redhat-marketplace-k86dh\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.121399 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-utilities\") pod \"redhat-marketplace-k86dh\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.121400 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8tdh\" (UniqueName: \"kubernetes.io/projected/4ea33e83-b688-45ab-9e06-565d288e628d-kube-api-access-w8tdh\") pod \"redhat-marketplace-k86dh\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.121814 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-catalog-content\") pod \"redhat-marketplace-k86dh\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.144932 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-w8tdh\" (UniqueName: \"kubernetes.io/projected/4ea33e83-b688-45ab-9e06-565d288e628d-kube-api-access-w8tdh\") pod \"redhat-marketplace-k86dh\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.195136 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:48 crc kubenswrapper[5029]: I0313 21:26:48.757397 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k86dh"] Mar 13 21:26:49 crc kubenswrapper[5029]: I0313 21:26:49.499829 5029 generic.go:334] "Generic (PLEG): container finished" podID="4ea33e83-b688-45ab-9e06-565d288e628d" containerID="83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f" exitCode=0 Mar 13 21:26:49 crc kubenswrapper[5029]: I0313 21:26:49.499921 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k86dh" event={"ID":"4ea33e83-b688-45ab-9e06-565d288e628d","Type":"ContainerDied","Data":"83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f"} Mar 13 21:26:49 crc kubenswrapper[5029]: I0313 21:26:49.500468 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k86dh" event={"ID":"4ea33e83-b688-45ab-9e06-565d288e628d","Type":"ContainerStarted","Data":"4e78f7683db8c0bf7805b5f4ef50e9376415d1ccf4cc70d33ea8080bc7d64dff"} Mar 13 21:26:50 crc kubenswrapper[5029]: I0313 21:26:50.537379 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k86dh" event={"ID":"4ea33e83-b688-45ab-9e06-565d288e628d","Type":"ContainerStarted","Data":"47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f"} Mar 13 21:26:52 crc kubenswrapper[5029]: I0313 21:26:52.558732 5029 generic.go:334] "Generic (PLEG): container finished" 
podID="4ea33e83-b688-45ab-9e06-565d288e628d" containerID="47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f" exitCode=0 Mar 13 21:26:52 crc kubenswrapper[5029]: I0313 21:26:52.558935 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k86dh" event={"ID":"4ea33e83-b688-45ab-9e06-565d288e628d","Type":"ContainerDied","Data":"47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f"} Mar 13 21:26:53 crc kubenswrapper[5029]: I0313 21:26:53.583452 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k86dh" event={"ID":"4ea33e83-b688-45ab-9e06-565d288e628d","Type":"ContainerStarted","Data":"0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966"} Mar 13 21:26:53 crc kubenswrapper[5029]: I0313 21:26:53.610439 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k86dh" podStartSLOduration=3.129509308 podStartE2EDuration="6.610408204s" podCreationTimestamp="2026-03-13 21:26:47 +0000 UTC" firstStartedPulling="2026-03-13 21:26:49.503146482 +0000 UTC m=+3569.519228885" lastFinishedPulling="2026-03-13 21:26:52.984045388 +0000 UTC m=+3573.000127781" observedRunningTime="2026-03-13 21:26:53.606049324 +0000 UTC m=+3573.622131727" watchObservedRunningTime="2026-03-13 21:26:53.610408204 +0000 UTC m=+3573.626490607" Mar 13 21:26:58 crc kubenswrapper[5029]: I0313 21:26:58.195637 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:58 crc kubenswrapper[5029]: I0313 21:26:58.196297 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:58 crc kubenswrapper[5029]: I0313 21:26:58.250420 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:58 crc 
kubenswrapper[5029]: I0313 21:26:58.951511 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:26:59 crc kubenswrapper[5029]: I0313 21:26:59.024147 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k86dh"] Mar 13 21:27:00 crc kubenswrapper[5029]: I0313 21:27:00.718833 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k86dh" podUID="4ea33e83-b688-45ab-9e06-565d288e628d" containerName="registry-server" containerID="cri-o://0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966" gracePeriod=2 Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.535167 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.568690 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-utilities\") pod \"4ea33e83-b688-45ab-9e06-565d288e628d\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.569087 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8tdh\" (UniqueName: \"kubernetes.io/projected/4ea33e83-b688-45ab-9e06-565d288e628d-kube-api-access-w8tdh\") pod \"4ea33e83-b688-45ab-9e06-565d288e628d\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.569435 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-catalog-content\") pod \"4ea33e83-b688-45ab-9e06-565d288e628d\" (UID: \"4ea33e83-b688-45ab-9e06-565d288e628d\") " Mar 13 21:27:01 crc 
kubenswrapper[5029]: I0313 21:27:01.570657 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-utilities" (OuterVolumeSpecName: "utilities") pod "4ea33e83-b688-45ab-9e06-565d288e628d" (UID: "4ea33e83-b688-45ab-9e06-565d288e628d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.570994 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.578967 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea33e83-b688-45ab-9e06-565d288e628d-kube-api-access-w8tdh" (OuterVolumeSpecName: "kube-api-access-w8tdh") pod "4ea33e83-b688-45ab-9e06-565d288e628d" (UID: "4ea33e83-b688-45ab-9e06-565d288e628d"). InnerVolumeSpecName "kube-api-access-w8tdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.607810 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ea33e83-b688-45ab-9e06-565d288e628d" (UID: "4ea33e83-b688-45ab-9e06-565d288e628d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.673614 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8tdh\" (UniqueName: \"kubernetes.io/projected/4ea33e83-b688-45ab-9e06-565d288e628d-kube-api-access-w8tdh\") on node \"crc\" DevicePath \"\"" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.673660 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea33e83-b688-45ab-9e06-565d288e628d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.733688 5029 generic.go:334] "Generic (PLEG): container finished" podID="4ea33e83-b688-45ab-9e06-565d288e628d" containerID="0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966" exitCode=0 Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.733744 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k86dh" event={"ID":"4ea33e83-b688-45ab-9e06-565d288e628d","Type":"ContainerDied","Data":"0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966"} Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.733796 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k86dh" event={"ID":"4ea33e83-b688-45ab-9e06-565d288e628d","Type":"ContainerDied","Data":"4e78f7683db8c0bf7805b5f4ef50e9376415d1ccf4cc70d33ea8080bc7d64dff"} Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.733813 5029 scope.go:117] "RemoveContainer" containerID="0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.734030 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k86dh" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.767657 5029 scope.go:117] "RemoveContainer" containerID="47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.785930 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k86dh"] Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.796482 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k86dh"] Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.800282 5029 scope.go:117] "RemoveContainer" containerID="83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.859645 5029 scope.go:117] "RemoveContainer" containerID="0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966" Mar 13 21:27:01 crc kubenswrapper[5029]: E0313 21:27:01.861272 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966\": container with ID starting with 0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966 not found: ID does not exist" containerID="0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.861318 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966"} err="failed to get container status \"0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966\": rpc error: code = NotFound desc = could not find container \"0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966\": container with ID starting with 0431e9c3113d1b947019e20e194e7e4139e22edda0768e68229a1d0194d9e966 not found: 
ID does not exist" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.861351 5029 scope.go:117] "RemoveContainer" containerID="47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f" Mar 13 21:27:01 crc kubenswrapper[5029]: E0313 21:27:01.861663 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f\": container with ID starting with 47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f not found: ID does not exist" containerID="47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.861698 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f"} err="failed to get container status \"47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f\": rpc error: code = NotFound desc = could not find container \"47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f\": container with ID starting with 47b2672784dcda64d36580fb4567851e2f915276e8723188683191d2f1956f4f not found: ID does not exist" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.861715 5029 scope.go:117] "RemoveContainer" containerID="83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f" Mar 13 21:27:01 crc kubenswrapper[5029]: E0313 21:27:01.861967 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f\": container with ID starting with 83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f not found: ID does not exist" containerID="83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f" Mar 13 21:27:01 crc kubenswrapper[5029]: I0313 21:27:01.861996 5029 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f"} err="failed to get container status \"83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f\": rpc error: code = NotFound desc = could not find container \"83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f\": container with ID starting with 83dbd3182a53a4537c7d35f59a0573a25f035bb5d45c3f8bb783252c283ff71f not found: ID does not exist" Mar 13 21:27:02 crc kubenswrapper[5029]: I0313 21:27:02.611159 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea33e83-b688-45ab-9e06-565d288e628d" path="/var/lib/kubelet/pods/4ea33e83-b688-45ab-9e06-565d288e628d/volumes" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.156747 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557288-8mh4x"] Mar 13 21:28:00 crc kubenswrapper[5029]: E0313 21:28:00.157920 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea33e83-b688-45ab-9e06-565d288e628d" containerName="extract-content" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.157938 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea33e83-b688-45ab-9e06-565d288e628d" containerName="extract-content" Mar 13 21:28:00 crc kubenswrapper[5029]: E0313 21:28:00.157974 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea33e83-b688-45ab-9e06-565d288e628d" containerName="registry-server" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.157981 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea33e83-b688-45ab-9e06-565d288e628d" containerName="registry-server" Mar 13 21:28:00 crc kubenswrapper[5029]: E0313 21:28:00.157994 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea33e83-b688-45ab-9e06-565d288e628d" containerName="extract-utilities" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.158007 5029 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea33e83-b688-45ab-9e06-565d288e628d" containerName="extract-utilities" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.158276 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea33e83-b688-45ab-9e06-565d288e628d" containerName="registry-server" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.159136 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557288-8mh4x" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.162001 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.162262 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.162329 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.182428 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557288-8mh4x"] Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.263877 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dtkn\" (UniqueName: \"kubernetes.io/projected/160ab6e6-8c27-4dad-8611-1836d3c189d7-kube-api-access-6dtkn\") pod \"auto-csr-approver-29557288-8mh4x\" (UID: \"160ab6e6-8c27-4dad-8611-1836d3c189d7\") " pod="openshift-infra/auto-csr-approver-29557288-8mh4x" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.366047 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dtkn\" (UniqueName: \"kubernetes.io/projected/160ab6e6-8c27-4dad-8611-1836d3c189d7-kube-api-access-6dtkn\") pod \"auto-csr-approver-29557288-8mh4x\" (UID: 
\"160ab6e6-8c27-4dad-8611-1836d3c189d7\") " pod="openshift-infra/auto-csr-approver-29557288-8mh4x" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.389355 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dtkn\" (UniqueName: \"kubernetes.io/projected/160ab6e6-8c27-4dad-8611-1836d3c189d7-kube-api-access-6dtkn\") pod \"auto-csr-approver-29557288-8mh4x\" (UID: \"160ab6e6-8c27-4dad-8611-1836d3c189d7\") " pod="openshift-infra/auto-csr-approver-29557288-8mh4x" Mar 13 21:28:00 crc kubenswrapper[5029]: I0313 21:28:00.482933 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557288-8mh4x" Mar 13 21:28:01 crc kubenswrapper[5029]: I0313 21:28:01.219457 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557288-8mh4x"] Mar 13 21:28:01 crc kubenswrapper[5029]: I0313 21:28:01.253626 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 21:28:01 crc kubenswrapper[5029]: I0313 21:28:01.312325 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557288-8mh4x" event={"ID":"160ab6e6-8c27-4dad-8611-1836d3c189d7","Type":"ContainerStarted","Data":"15f7ec1f8f5c53b5d1c9bc206718d33f0cf400aa6260cae66f24f405270ec7f5"} Mar 13 21:28:03 crc kubenswrapper[5029]: I0313 21:28:03.345168 5029 generic.go:334] "Generic (PLEG): container finished" podID="160ab6e6-8c27-4dad-8611-1836d3c189d7" containerID="14a7e010cd805d47b140dd182487aef6891acd4a64a7831a2ddb7b4485943351" exitCode=0 Mar 13 21:28:03 crc kubenswrapper[5029]: I0313 21:28:03.345466 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557288-8mh4x" event={"ID":"160ab6e6-8c27-4dad-8611-1836d3c189d7","Type":"ContainerDied","Data":"14a7e010cd805d47b140dd182487aef6891acd4a64a7831a2ddb7b4485943351"} Mar 13 21:28:04 crc kubenswrapper[5029]: I0313 
21:28:04.982726 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557288-8mh4x" Mar 13 21:28:05 crc kubenswrapper[5029]: I0313 21:28:05.096293 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dtkn\" (UniqueName: \"kubernetes.io/projected/160ab6e6-8c27-4dad-8611-1836d3c189d7-kube-api-access-6dtkn\") pod \"160ab6e6-8c27-4dad-8611-1836d3c189d7\" (UID: \"160ab6e6-8c27-4dad-8611-1836d3c189d7\") " Mar 13 21:28:05 crc kubenswrapper[5029]: I0313 21:28:05.112350 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160ab6e6-8c27-4dad-8611-1836d3c189d7-kube-api-access-6dtkn" (OuterVolumeSpecName: "kube-api-access-6dtkn") pod "160ab6e6-8c27-4dad-8611-1836d3c189d7" (UID: "160ab6e6-8c27-4dad-8611-1836d3c189d7"). InnerVolumeSpecName "kube-api-access-6dtkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:28:05 crc kubenswrapper[5029]: I0313 21:28:05.199784 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dtkn\" (UniqueName: \"kubernetes.io/projected/160ab6e6-8c27-4dad-8611-1836d3c189d7-kube-api-access-6dtkn\") on node \"crc\" DevicePath \"\"" Mar 13 21:28:05 crc kubenswrapper[5029]: I0313 21:28:05.367421 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557288-8mh4x" event={"ID":"160ab6e6-8c27-4dad-8611-1836d3c189d7","Type":"ContainerDied","Data":"15f7ec1f8f5c53b5d1c9bc206718d33f0cf400aa6260cae66f24f405270ec7f5"} Mar 13 21:28:05 crc kubenswrapper[5029]: I0313 21:28:05.367488 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557288-8mh4x" Mar 13 21:28:05 crc kubenswrapper[5029]: I0313 21:28:05.367489 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15f7ec1f8f5c53b5d1c9bc206718d33f0cf400aa6260cae66f24f405270ec7f5" Mar 13 21:28:06 crc kubenswrapper[5029]: I0313 21:28:06.125623 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557282-qm888"] Mar 13 21:28:06 crc kubenswrapper[5029]: I0313 21:28:06.139238 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557282-qm888"] Mar 13 21:28:06 crc kubenswrapper[5029]: I0313 21:28:06.610439 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dcee36b-c758-47ab-9e5e-5e8964ea5bdf" path="/var/lib/kubelet/pods/7dcee36b-c758-47ab-9e5e-5e8964ea5bdf/volumes" Mar 13 21:28:13 crc kubenswrapper[5029]: I0313 21:28:13.950708 5029 scope.go:117] "RemoveContainer" containerID="621f324f272f5c956a96f5b9de6d6216222f319fc2849ac2a1b5107da59c3731" Mar 13 21:28:31 crc kubenswrapper[5029]: I0313 21:28:31.950872 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:28:31 crc kubenswrapper[5029]: I0313 21:28:31.951873 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:29:01 crc kubenswrapper[5029]: I0313 21:29:01.950662 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:29:01 crc kubenswrapper[5029]: I0313 21:29:01.951246 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:29:31 crc kubenswrapper[5029]: I0313 21:29:31.950150 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:29:31 crc kubenswrapper[5029]: I0313 21:29:31.950734 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:29:31 crc kubenswrapper[5029]: I0313 21:29:31.950781 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 21:29:31 crc kubenswrapper[5029]: I0313 21:29:31.951691 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98f20ab9a14ee8f5298136685ff74193f1b99413c23d8f965910713c88de0c7f"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 21:29:31 crc 
kubenswrapper[5029]: I0313 21:29:31.951746 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://98f20ab9a14ee8f5298136685ff74193f1b99413c23d8f965910713c88de0c7f" gracePeriod=600
Mar 13 21:29:32 crc kubenswrapper[5029]: I0313 21:29:32.263300 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="98f20ab9a14ee8f5298136685ff74193f1b99413c23d8f965910713c88de0c7f" exitCode=0
Mar 13 21:29:32 crc kubenswrapper[5029]: I0313 21:29:32.263556 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"98f20ab9a14ee8f5298136685ff74193f1b99413c23d8f965910713c88de0c7f"}
Mar 13 21:29:32 crc kubenswrapper[5029]: I0313 21:29:32.263752 5029 scope.go:117] "RemoveContainer" containerID="49adf9557c59b9cdbcce97e7e6714041bf403b7e3e721a387e12da3e633cdf8a"
Mar 13 21:29:33 crc kubenswrapper[5029]: I0313 21:29:33.274517 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068"}
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.174106 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"]
Mar 13 21:30:00 crc kubenswrapper[5029]: E0313 21:30:00.175705 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160ab6e6-8c27-4dad-8611-1836d3c189d7" containerName="oc"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.175726 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="160ab6e6-8c27-4dad-8611-1836d3c189d7" containerName="oc"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.175960 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="160ab6e6-8c27-4dad-8611-1836d3c189d7" containerName="oc"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.176840 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.179805 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.181171 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.189181 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557290-cdvm9"]
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.191492 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557290-cdvm9"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.194445 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.194750 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.194972 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.206134 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3e01b40-d33f-4e22-b207-80ab52b83f84-secret-volume\") pod \"collect-profiles-29557290-hbc9w\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.206738 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3e01b40-d33f-4e22-b207-80ab52b83f84-config-volume\") pod \"collect-profiles-29557290-hbc9w\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.207262 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pfvf\" (UniqueName: \"kubernetes.io/projected/e3e01b40-d33f-4e22-b207-80ab52b83f84-kube-api-access-4pfvf\") pod \"collect-profiles-29557290-hbc9w\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.208472 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzxtt\" (UniqueName: \"kubernetes.io/projected/ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe-kube-api-access-xzxtt\") pod \"auto-csr-approver-29557290-cdvm9\" (UID: \"ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe\") " pod="openshift-infra/auto-csr-approver-29557290-cdvm9"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.210334 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557290-cdvm9"]
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.234839 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"]
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.310775 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pfvf\" (UniqueName: \"kubernetes.io/projected/e3e01b40-d33f-4e22-b207-80ab52b83f84-kube-api-access-4pfvf\") pod \"collect-profiles-29557290-hbc9w\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.310963 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzxtt\" (UniqueName: \"kubernetes.io/projected/ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe-kube-api-access-xzxtt\") pod \"auto-csr-approver-29557290-cdvm9\" (UID: \"ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe\") " pod="openshift-infra/auto-csr-approver-29557290-cdvm9"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.311017 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3e01b40-d33f-4e22-b207-80ab52b83f84-secret-volume\") pod \"collect-profiles-29557290-hbc9w\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.311122 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3e01b40-d33f-4e22-b207-80ab52b83f84-config-volume\") pod \"collect-profiles-29557290-hbc9w\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.312191 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3e01b40-d33f-4e22-b207-80ab52b83f84-config-volume\") pod \"collect-profiles-29557290-hbc9w\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.317789 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3e01b40-d33f-4e22-b207-80ab52b83f84-secret-volume\") pod \"collect-profiles-29557290-hbc9w\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.329978 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzxtt\" (UniqueName: \"kubernetes.io/projected/ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe-kube-api-access-xzxtt\") pod \"auto-csr-approver-29557290-cdvm9\" (UID: \"ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe\") " pod="openshift-infra/auto-csr-approver-29557290-cdvm9"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.331222 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pfvf\" (UniqueName: \"kubernetes.io/projected/e3e01b40-d33f-4e22-b207-80ab52b83f84-kube-api-access-4pfvf\") pod \"collect-profiles-29557290-hbc9w\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.505714 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:00 crc kubenswrapper[5029]: I0313 21:30:00.529934 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557290-cdvm9"
Mar 13 21:30:01 crc kubenswrapper[5029]: I0313 21:30:01.118666 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"]
Mar 13 21:30:01 crc kubenswrapper[5029]: I0313 21:30:01.291550 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557290-cdvm9"]
Mar 13 21:30:01 crc kubenswrapper[5029]: W0313 21:30:01.300074 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecde9149_9e0f_4a1d_bab2_b1196ff5c9fe.slice/crio-007a9b623ff1f8c93a77eb1c22593547c1be61e20b686f235ce940ec68a49aec WatchSource:0}: Error finding container 007a9b623ff1f8c93a77eb1c22593547c1be61e20b686f235ce940ec68a49aec: Status 404 returned error can't find the container with id 007a9b623ff1f8c93a77eb1c22593547c1be61e20b686f235ce940ec68a49aec
Mar 13 21:30:01 crc kubenswrapper[5029]: I0313 21:30:01.560257 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w" event={"ID":"e3e01b40-d33f-4e22-b207-80ab52b83f84","Type":"ContainerStarted","Data":"36e7cac32fb47d630af3b1f4864e4e0532766b7ebbf24275da45b8b8b33d5f63"}
Mar 13 21:30:01 crc kubenswrapper[5029]: I0313 21:30:01.560620 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w" event={"ID":"e3e01b40-d33f-4e22-b207-80ab52b83f84","Type":"ContainerStarted","Data":"1ed16f28f5784965894589e4e3212b48558dfe18b8914a262d8926eb2e5d81be"}
Mar 13 21:30:01 crc kubenswrapper[5029]: I0313 21:30:01.562093 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557290-cdvm9" event={"ID":"ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe","Type":"ContainerStarted","Data":"007a9b623ff1f8c93a77eb1c22593547c1be61e20b686f235ce940ec68a49aec"}
Mar 13 21:30:01 crc kubenswrapper[5029]: I0313 21:30:01.579050 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w" podStartSLOduration=1.579021244 podStartE2EDuration="1.579021244s" podCreationTimestamp="2026-03-13 21:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 21:30:01.57704315 +0000 UTC m=+3761.593125553" watchObservedRunningTime="2026-03-13 21:30:01.579021244 +0000 UTC m=+3761.595103647"
Mar 13 21:30:02 crc kubenswrapper[5029]: I0313 21:30:02.576536 5029 generic.go:334] "Generic (PLEG): container finished" podID="e3e01b40-d33f-4e22-b207-80ab52b83f84" containerID="36e7cac32fb47d630af3b1f4864e4e0532766b7ebbf24275da45b8b8b33d5f63" exitCode=0
Mar 13 21:30:02 crc kubenswrapper[5029]: I0313 21:30:02.576630 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w" event={"ID":"e3e01b40-d33f-4e22-b207-80ab52b83f84","Type":"ContainerDied","Data":"36e7cac32fb47d630af3b1f4864e4e0532766b7ebbf24275da45b8b8b33d5f63"}
Mar 13 21:30:03 crc kubenswrapper[5029]: I0313 21:30:03.593152 5029 generic.go:334] "Generic (PLEG): container finished" podID="ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe" containerID="0bc74a3f316ff7b85e92e40dae860cd103b2fef3d06b46bf5cde611dc38d0148" exitCode=0
Mar 13 21:30:03 crc kubenswrapper[5029]: I0313 21:30:03.593976 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557290-cdvm9" event={"ID":"ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe","Type":"ContainerDied","Data":"0bc74a3f316ff7b85e92e40dae860cd103b2fef3d06b46bf5cde611dc38d0148"}
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.204900 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.229004 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3e01b40-d33f-4e22-b207-80ab52b83f84-secret-volume\") pod \"e3e01b40-d33f-4e22-b207-80ab52b83f84\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") "
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.229110 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3e01b40-d33f-4e22-b207-80ab52b83f84-config-volume\") pod \"e3e01b40-d33f-4e22-b207-80ab52b83f84\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") "
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.229237 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pfvf\" (UniqueName: \"kubernetes.io/projected/e3e01b40-d33f-4e22-b207-80ab52b83f84-kube-api-access-4pfvf\") pod \"e3e01b40-d33f-4e22-b207-80ab52b83f84\" (UID: \"e3e01b40-d33f-4e22-b207-80ab52b83f84\") "
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.232043 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3e01b40-d33f-4e22-b207-80ab52b83f84-config-volume" (OuterVolumeSpecName: "config-volume") pod "e3e01b40-d33f-4e22-b207-80ab52b83f84" (UID: "e3e01b40-d33f-4e22-b207-80ab52b83f84"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.240620 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e01b40-d33f-4e22-b207-80ab52b83f84-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e3e01b40-d33f-4e22-b207-80ab52b83f84" (UID: "e3e01b40-d33f-4e22-b207-80ab52b83f84"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.248110 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e01b40-d33f-4e22-b207-80ab52b83f84-kube-api-access-4pfvf" (OuterVolumeSpecName: "kube-api-access-4pfvf") pod "e3e01b40-d33f-4e22-b207-80ab52b83f84" (UID: "e3e01b40-d33f-4e22-b207-80ab52b83f84"). InnerVolumeSpecName "kube-api-access-4pfvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.334239 5029 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3e01b40-d33f-4e22-b207-80ab52b83f84-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.334294 5029 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3e01b40-d33f-4e22-b207-80ab52b83f84-config-volume\") on node \"crc\" DevicePath \"\""
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.334314 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pfvf\" (UniqueName: \"kubernetes.io/projected/e3e01b40-d33f-4e22-b207-80ab52b83f84-kube-api-access-4pfvf\") on node \"crc\" DevicePath \"\""
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.644066 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w"
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.647617 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557290-hbc9w" event={"ID":"e3e01b40-d33f-4e22-b207-80ab52b83f84","Type":"ContainerDied","Data":"1ed16f28f5784965894589e4e3212b48558dfe18b8914a262d8926eb2e5d81be"}
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.647673 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed16f28f5784965894589e4e3212b48558dfe18b8914a262d8926eb2e5d81be"
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.672397 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9"]
Mar 13 21:30:04 crc kubenswrapper[5029]: I0313 21:30:04.682211 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557245-tqct9"]
Mar 13 21:30:05 crc kubenswrapper[5029]: I0313 21:30:05.365607 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557290-cdvm9"
Mar 13 21:30:05 crc kubenswrapper[5029]: I0313 21:30:05.459677 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzxtt\" (UniqueName: \"kubernetes.io/projected/ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe-kube-api-access-xzxtt\") pod \"ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe\" (UID: \"ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe\") "
Mar 13 21:30:05 crc kubenswrapper[5029]: I0313 21:30:05.471605 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe-kube-api-access-xzxtt" (OuterVolumeSpecName: "kube-api-access-xzxtt") pod "ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe" (UID: "ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe"). InnerVolumeSpecName "kube-api-access-xzxtt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:30:05 crc kubenswrapper[5029]: I0313 21:30:05.563601 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzxtt\" (UniqueName: \"kubernetes.io/projected/ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe-kube-api-access-xzxtt\") on node \"crc\" DevicePath \"\""
Mar 13 21:30:05 crc kubenswrapper[5029]: I0313 21:30:05.656741 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557290-cdvm9" event={"ID":"ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe","Type":"ContainerDied","Data":"007a9b623ff1f8c93a77eb1c22593547c1be61e20b686f235ce940ec68a49aec"}
Mar 13 21:30:05 crc kubenswrapper[5029]: I0313 21:30:05.656796 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="007a9b623ff1f8c93a77eb1c22593547c1be61e20b686f235ce940ec68a49aec"
Mar 13 21:30:05 crc kubenswrapper[5029]: I0313 21:30:05.656889 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557290-cdvm9"
Mar 13 21:30:06 crc kubenswrapper[5029]: I0313 21:30:06.426714 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557284-w9ptx"]
Mar 13 21:30:06 crc kubenswrapper[5029]: I0313 21:30:06.436688 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557284-w9ptx"]
Mar 13 21:30:06 crc kubenswrapper[5029]: I0313 21:30:06.612926 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="306bd8a5-0ed2-4533-98db-9b69bfed7710" path="/var/lib/kubelet/pods/306bd8a5-0ed2-4533-98db-9b69bfed7710/volumes"
Mar 13 21:30:06 crc kubenswrapper[5029]: I0313 21:30:06.616244 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098" path="/var/lib/kubelet/pods/b49d19dc-4d6e-4c88-8abb-9a5e3d4ce098/volumes"
Mar 13 21:30:14 crc kubenswrapper[5029]: I0313 21:30:14.077142 5029 scope.go:117] "RemoveContainer" containerID="e6885b9648e6492fb5950a6562c7f58f4243dd3e8fa4620eb3a6aa097f6375b2"
Mar 13 21:30:14 crc kubenswrapper[5029]: I0313 21:30:14.120514 5029 scope.go:117] "RemoveContainer" containerID="8c29d78cc650b5ae4a45123b48f1d7f31c44783669965a1702475ea08048fcf5"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.164837 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557292-wxrv4"]
Mar 13 21:32:00 crc kubenswrapper[5029]: E0313 21:32:00.167070 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e01b40-d33f-4e22-b207-80ab52b83f84" containerName="collect-profiles"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.167093 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e01b40-d33f-4e22-b207-80ab52b83f84" containerName="collect-profiles"
Mar 13 21:32:00 crc kubenswrapper[5029]: E0313 21:32:00.167136 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe" containerName="oc"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.167146 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe" containerName="oc"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.167352 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e01b40-d33f-4e22-b207-80ab52b83f84" containerName="collect-profiles"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.167369 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe" containerName="oc"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.168313 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557292-wxrv4"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.172401 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.172594 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.172722 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.182363 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557292-wxrv4"]
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.346590 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vclh\" (UniqueName: \"kubernetes.io/projected/e9cdcb3c-ecdb-4b28-977d-cb95867968b9-kube-api-access-4vclh\") pod \"auto-csr-approver-29557292-wxrv4\" (UID: \"e9cdcb3c-ecdb-4b28-977d-cb95867968b9\") " pod="openshift-infra/auto-csr-approver-29557292-wxrv4"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.449152 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vclh\" (UniqueName: \"kubernetes.io/projected/e9cdcb3c-ecdb-4b28-977d-cb95867968b9-kube-api-access-4vclh\") pod \"auto-csr-approver-29557292-wxrv4\" (UID: \"e9cdcb3c-ecdb-4b28-977d-cb95867968b9\") " pod="openshift-infra/auto-csr-approver-29557292-wxrv4"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.675516 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vclh\" (UniqueName: \"kubernetes.io/projected/e9cdcb3c-ecdb-4b28-977d-cb95867968b9-kube-api-access-4vclh\") pod \"auto-csr-approver-29557292-wxrv4\" (UID: \"e9cdcb3c-ecdb-4b28-977d-cb95867968b9\") " pod="openshift-infra/auto-csr-approver-29557292-wxrv4"
Mar 13 21:32:00 crc kubenswrapper[5029]: I0313 21:32:00.830648 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557292-wxrv4"
Mar 13 21:32:01 crc kubenswrapper[5029]: I0313 21:32:01.359600 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557292-wxrv4"]
Mar 13 21:32:01 crc kubenswrapper[5029]: I0313 21:32:01.949883 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:32:01 crc kubenswrapper[5029]: I0313 21:32:01.949951 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:32:02 crc kubenswrapper[5029]: I0313 21:32:02.278798 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557292-wxrv4" event={"ID":"e9cdcb3c-ecdb-4b28-977d-cb95867968b9","Type":"ContainerStarted","Data":"e382cfb35d2b4cff0320b87fa6a6b1cb08ee5c11d982a72b166b9f542f3e4b20"}
Mar 13 21:32:03 crc kubenswrapper[5029]: I0313 21:32:03.292399 5029 generic.go:334] "Generic (PLEG): container finished" podID="e9cdcb3c-ecdb-4b28-977d-cb95867968b9" containerID="12a5cb04f3a8200305a7b0a7611cc80080f4126c72698ddd755cb37fe2dbdf0f" exitCode=0
Mar 13 21:32:03 crc kubenswrapper[5029]: I0313 21:32:03.292945 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557292-wxrv4" event={"ID":"e9cdcb3c-ecdb-4b28-977d-cb95867968b9","Type":"ContainerDied","Data":"12a5cb04f3a8200305a7b0a7611cc80080f4126c72698ddd755cb37fe2dbdf0f"}
Mar 13 21:32:05 crc kubenswrapper[5029]: I0313 21:32:05.045576 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557292-wxrv4"
Mar 13 21:32:05 crc kubenswrapper[5029]: I0313 21:32:05.086352 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vclh\" (UniqueName: \"kubernetes.io/projected/e9cdcb3c-ecdb-4b28-977d-cb95867968b9-kube-api-access-4vclh\") pod \"e9cdcb3c-ecdb-4b28-977d-cb95867968b9\" (UID: \"e9cdcb3c-ecdb-4b28-977d-cb95867968b9\") "
Mar 13 21:32:05 crc kubenswrapper[5029]: I0313 21:32:05.101634 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9cdcb3c-ecdb-4b28-977d-cb95867968b9-kube-api-access-4vclh" (OuterVolumeSpecName: "kube-api-access-4vclh") pod "e9cdcb3c-ecdb-4b28-977d-cb95867968b9" (UID: "e9cdcb3c-ecdb-4b28-977d-cb95867968b9"). InnerVolumeSpecName "kube-api-access-4vclh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:32:05 crc kubenswrapper[5029]: I0313 21:32:05.190160 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vclh\" (UniqueName: \"kubernetes.io/projected/e9cdcb3c-ecdb-4b28-977d-cb95867968b9-kube-api-access-4vclh\") on node \"crc\" DevicePath \"\""
Mar 13 21:32:05 crc kubenswrapper[5029]: I0313 21:32:05.318460 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557292-wxrv4" event={"ID":"e9cdcb3c-ecdb-4b28-977d-cb95867968b9","Type":"ContainerDied","Data":"e382cfb35d2b4cff0320b87fa6a6b1cb08ee5c11d982a72b166b9f542f3e4b20"}
Mar 13 21:32:05 crc kubenswrapper[5029]: I0313 21:32:05.318508 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e382cfb35d2b4cff0320b87fa6a6b1cb08ee5c11d982a72b166b9f542f3e4b20"
Mar 13 21:32:05 crc kubenswrapper[5029]: I0313 21:32:05.318582 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557292-wxrv4"
Mar 13 21:32:06 crc kubenswrapper[5029]: I0313 21:32:06.137553 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557286-wqb9q"]
Mar 13 21:32:06 crc kubenswrapper[5029]: I0313 21:32:06.149014 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557286-wqb9q"]
Mar 13 21:32:06 crc kubenswrapper[5029]: I0313 21:32:06.614699 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abee0019-846f-4623-8e18-bafad404fb33" path="/var/lib/kubelet/pods/abee0019-846f-4623-8e18-bafad404fb33/volumes"
Mar 13 21:32:14 crc kubenswrapper[5029]: I0313 21:32:14.300985 5029 scope.go:117] "RemoveContainer" containerID="1611f3f281832004d5bd5ccaee512e8349701d201977979a47e7096f8a31814d"
Mar 13 21:32:31 crc kubenswrapper[5029]: I0313 21:32:31.950111 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:32:31 crc kubenswrapper[5029]: I0313 21:32:31.950729 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:33:01 crc kubenswrapper[5029]: I0313 21:33:01.950228 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:33:01 crc kubenswrapper[5029]: I0313 21:33:01.952254 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:33:01 crc kubenswrapper[5029]: I0313 21:33:01.952428 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2"
Mar 13 21:33:01 crc kubenswrapper[5029]: I0313 21:33:01.953610 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 21:33:01 crc kubenswrapper[5029]: I0313 21:33:01.953784 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" gracePeriod=600
Mar 13 21:33:02 crc kubenswrapper[5029]: E0313 21:33:02.082124 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:33:03 crc kubenswrapper[5029]: I0313 21:33:03.004446 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" exitCode=0
Mar 13 21:33:03 crc kubenswrapper[5029]: I0313 21:33:03.004510 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068"}
Mar 13 21:33:03 crc kubenswrapper[5029]: I0313 21:33:03.005979 5029 scope.go:117] "RemoveContainer" containerID="98f20ab9a14ee8f5298136685ff74193f1b99413c23d8f965910713c88de0c7f"
Mar 13 21:33:03 crc kubenswrapper[5029]: I0313 21:33:03.007226 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068"
Mar 13 21:33:03 crc kubenswrapper[5029]: E0313 21:33:03.007678 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:33:17 crc kubenswrapper[5029]: I0313 21:33:17.600722 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068"
Mar 13 21:33:17 crc kubenswrapper[5029]: E0313 21:33:17.601570 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:33:29 crc kubenswrapper[5029]: I0313 21:33:29.600353 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068"
Mar 13 21:33:29 crc kubenswrapper[5029]: E0313 21:33:29.601708 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.515054 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ctvn9"]
Mar 13 21:33:35 crc kubenswrapper[5029]: E0313 21:33:35.516660 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9cdcb3c-ecdb-4b28-977d-cb95867968b9" containerName="oc"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.516678 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9cdcb3c-ecdb-4b28-977d-cb95867968b9" containerName="oc"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.516950 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9cdcb3c-ecdb-4b28-977d-cb95867968b9" containerName="oc"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.525493 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ctvn9"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.536767 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ctvn9"]
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.632433 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-catalog-content\") pod \"certified-operators-ctvn9\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " pod="openshift-marketplace/certified-operators-ctvn9"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.632502 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nk4c\" (UniqueName: \"kubernetes.io/projected/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-kube-api-access-6nk4c\") pod \"certified-operators-ctvn9\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " pod="openshift-marketplace/certified-operators-ctvn9"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.632537 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-utilities\") pod \"certified-operators-ctvn9\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " pod="openshift-marketplace/certified-operators-ctvn9"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.736571 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-catalog-content\") pod \"certified-operators-ctvn9\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " pod="openshift-marketplace/certified-operators-ctvn9"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.737023 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nk4c\" (UniqueName: \"kubernetes.io/projected/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-kube-api-access-6nk4c\") pod \"certified-operators-ctvn9\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " pod="openshift-marketplace/certified-operators-ctvn9"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.737072 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-utilities\") pod \"certified-operators-ctvn9\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " pod="openshift-marketplace/certified-operators-ctvn9"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.737505 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-catalog-content\") pod \"certified-operators-ctvn9\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " pod="openshift-marketplace/certified-operators-ctvn9"
Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.737575 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-utilities\") pod \"certified-operators-ctvn9\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") "
pod="openshift-marketplace/certified-operators-ctvn9" Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.765332 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nk4c\" (UniqueName: \"kubernetes.io/projected/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-kube-api-access-6nk4c\") pod \"certified-operators-ctvn9\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " pod="openshift-marketplace/certified-operators-ctvn9" Mar 13 21:33:35 crc kubenswrapper[5029]: I0313 21:33:35.865405 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ctvn9" Mar 13 21:33:36 crc kubenswrapper[5029]: I0313 21:33:36.577787 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ctvn9"] Mar 13 21:33:37 crc kubenswrapper[5029]: I0313 21:33:37.384362 5029 generic.go:334] "Generic (PLEG): container finished" podID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerID="b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e" exitCode=0 Mar 13 21:33:37 crc kubenswrapper[5029]: I0313 21:33:37.384454 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctvn9" event={"ID":"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820","Type":"ContainerDied","Data":"b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e"} Mar 13 21:33:37 crc kubenswrapper[5029]: I0313 21:33:37.385117 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctvn9" event={"ID":"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820","Type":"ContainerStarted","Data":"ad1a9f7670ef9c20d38c341ef43338f56f84dfc04a68224d8220859d31825fdd"} Mar 13 21:33:37 crc kubenswrapper[5029]: I0313 21:33:37.387912 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 21:33:39 crc kubenswrapper[5029]: I0313 21:33:39.419210 5029 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-ctvn9" event={"ID":"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820","Type":"ContainerStarted","Data":"0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75"} Mar 13 21:33:41 crc kubenswrapper[5029]: I0313 21:33:41.444818 5029 generic.go:334] "Generic (PLEG): container finished" podID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerID="0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75" exitCode=0 Mar 13 21:33:41 crc kubenswrapper[5029]: I0313 21:33:41.445071 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctvn9" event={"ID":"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820","Type":"ContainerDied","Data":"0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75"} Mar 13 21:33:41 crc kubenswrapper[5029]: I0313 21:33:41.600176 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:33:41 crc kubenswrapper[5029]: E0313 21:33:41.600601 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:33:42 crc kubenswrapper[5029]: I0313 21:33:42.458689 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctvn9" event={"ID":"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820","Type":"ContainerStarted","Data":"22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7"} Mar 13 21:33:42 crc kubenswrapper[5029]: I0313 21:33:42.488385 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ctvn9" 
podStartSLOduration=2.783712981 podStartE2EDuration="7.488355792s" podCreationTimestamp="2026-03-13 21:33:35 +0000 UTC" firstStartedPulling="2026-03-13 21:33:37.38761289 +0000 UTC m=+3977.403695293" lastFinishedPulling="2026-03-13 21:33:42.092255701 +0000 UTC m=+3982.108338104" observedRunningTime="2026-03-13 21:33:42.479370106 +0000 UTC m=+3982.495452519" watchObservedRunningTime="2026-03-13 21:33:42.488355792 +0000 UTC m=+3982.504438195" Mar 13 21:33:45 crc kubenswrapper[5029]: I0313 21:33:45.866543 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ctvn9" Mar 13 21:33:45 crc kubenswrapper[5029]: I0313 21:33:45.867530 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ctvn9" Mar 13 21:33:45 crc kubenswrapper[5029]: I0313 21:33:45.928939 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ctvn9" Mar 13 21:33:55 crc kubenswrapper[5029]: I0313 21:33:55.599630 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:33:55 crc kubenswrapper[5029]: E0313 21:33:55.601014 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:33:55 crc kubenswrapper[5029]: I0313 21:33:55.926774 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ctvn9" Mar 13 21:33:55 crc kubenswrapper[5029]: I0313 21:33:55.988568 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-ctvn9"] Mar 13 21:33:56 crc kubenswrapper[5029]: I0313 21:33:56.624489 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ctvn9" podUID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerName="registry-server" containerID="cri-o://22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7" gracePeriod=2 Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.440373 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ctvn9" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.569590 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-catalog-content\") pod \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.570445 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nk4c\" (UniqueName: \"kubernetes.io/projected/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-kube-api-access-6nk4c\") pod \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.570483 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-utilities\") pod \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\" (UID: \"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820\") " Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.571639 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-utilities" (OuterVolumeSpecName: "utilities") pod "d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" (UID: 
"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.587137 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-kube-api-access-6nk4c" (OuterVolumeSpecName: "kube-api-access-6nk4c") pod "d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" (UID: "d6e04b7e-cebd-49e5-b4e6-6b52d83ba820"). InnerVolumeSpecName "kube-api-access-6nk4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.639873 5029 generic.go:334] "Generic (PLEG): container finished" podID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerID="22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7" exitCode=0 Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.639919 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctvn9" event={"ID":"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820","Type":"ContainerDied","Data":"22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7"} Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.639948 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctvn9" event={"ID":"d6e04b7e-cebd-49e5-b4e6-6b52d83ba820","Type":"ContainerDied","Data":"ad1a9f7670ef9c20d38c341ef43338f56f84dfc04a68224d8220859d31825fdd"} Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.639965 5029 scope.go:117] "RemoveContainer" containerID="22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.640086 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ctvn9" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.641339 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" (UID: "d6e04b7e-cebd-49e5-b4e6-6b52d83ba820"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.670017 5029 scope.go:117] "RemoveContainer" containerID="0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.672981 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nk4c\" (UniqueName: \"kubernetes.io/projected/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-kube-api-access-6nk4c\") on node \"crc\" DevicePath \"\"" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.673032 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.673044 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.717265 5029 scope.go:117] "RemoveContainer" containerID="b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.768747 5029 scope.go:117] "RemoveContainer" containerID="22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7" Mar 13 21:33:57 crc kubenswrapper[5029]: E0313 21:33:57.782638 5029 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7\": container with ID starting with 22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7 not found: ID does not exist" containerID="22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.782985 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7"} err="failed to get container status \"22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7\": rpc error: code = NotFound desc = could not find container \"22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7\": container with ID starting with 22aede7840bb0f079c6cfcdd6470cf6aa5451fb65b85941367942b24b52b7aa7 not found: ID does not exist" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.783178 5029 scope.go:117] "RemoveContainer" containerID="0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75" Mar 13 21:33:57 crc kubenswrapper[5029]: E0313 21:33:57.785607 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75\": container with ID starting with 0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75 not found: ID does not exist" containerID="0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.785669 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75"} err="failed to get container status \"0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75\": rpc error: code = NotFound desc = could not find container 
\"0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75\": container with ID starting with 0724973b6253f63d75f42a807e31e47f515e8d6775282a91664584e5d71c4a75 not found: ID does not exist" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.785709 5029 scope.go:117] "RemoveContainer" containerID="b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e" Mar 13 21:33:57 crc kubenswrapper[5029]: E0313 21:33:57.786333 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e\": container with ID starting with b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e not found: ID does not exist" containerID="b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.786373 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e"} err="failed to get container status \"b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e\": rpc error: code = NotFound desc = could not find container \"b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e\": container with ID starting with b37a52e7cdbf34baf68f1547f919e8d94bfef0a43b3b213c5980466e78cc883e not found: ID does not exist" Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.981713 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ctvn9"] Mar 13 21:33:57 crc kubenswrapper[5029]: I0313 21:33:57.994087 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ctvn9"] Mar 13 21:33:58 crc kubenswrapper[5029]: I0313 21:33:58.613086 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" 
path="/var/lib/kubelet/pods/d6e04b7e-cebd-49e5-b4e6-6b52d83ba820/volumes" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.156578 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557294-jv7l6"] Mar 13 21:34:00 crc kubenswrapper[5029]: E0313 21:34:00.157069 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerName="extract-content" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.157087 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerName="extract-content" Mar 13 21:34:00 crc kubenswrapper[5029]: E0313 21:34:00.157126 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerName="extract-utilities" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.157135 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerName="extract-utilities" Mar 13 21:34:00 crc kubenswrapper[5029]: E0313 21:34:00.157154 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerName="registry-server" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.157164 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerName="registry-server" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.157408 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6e04b7e-cebd-49e5-b4e6-6b52d83ba820" containerName="registry-server" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.158132 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557294-jv7l6" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.162123 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.166485 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.171468 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.184581 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557294-jv7l6"] Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.245907 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn2j8\" (UniqueName: \"kubernetes.io/projected/77743f8d-0b5a-4341-b6a0-51b2b7b72bc3-kube-api-access-mn2j8\") pod \"auto-csr-approver-29557294-jv7l6\" (UID: \"77743f8d-0b5a-4341-b6a0-51b2b7b72bc3\") " pod="openshift-infra/auto-csr-approver-29557294-jv7l6" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.349370 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn2j8\" (UniqueName: \"kubernetes.io/projected/77743f8d-0b5a-4341-b6a0-51b2b7b72bc3-kube-api-access-mn2j8\") pod \"auto-csr-approver-29557294-jv7l6\" (UID: \"77743f8d-0b5a-4341-b6a0-51b2b7b72bc3\") " pod="openshift-infra/auto-csr-approver-29557294-jv7l6" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.386766 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn2j8\" (UniqueName: \"kubernetes.io/projected/77743f8d-0b5a-4341-b6a0-51b2b7b72bc3-kube-api-access-mn2j8\") pod \"auto-csr-approver-29557294-jv7l6\" (UID: \"77743f8d-0b5a-4341-b6a0-51b2b7b72bc3\") " 
pod="openshift-infra/auto-csr-approver-29557294-jv7l6" Mar 13 21:34:00 crc kubenswrapper[5029]: I0313 21:34:00.484373 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557294-jv7l6" Mar 13 21:34:01 crc kubenswrapper[5029]: I0313 21:34:01.102453 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557294-jv7l6"] Mar 13 21:34:01 crc kubenswrapper[5029]: I0313 21:34:01.929058 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557294-jv7l6" event={"ID":"77743f8d-0b5a-4341-b6a0-51b2b7b72bc3","Type":"ContainerStarted","Data":"2c357d4c8c28671543a07da91b9f6335768de63e3078f484686baf4a95588e1e"} Mar 13 21:34:02 crc kubenswrapper[5029]: I0313 21:34:02.945875 5029 generic.go:334] "Generic (PLEG): container finished" podID="77743f8d-0b5a-4341-b6a0-51b2b7b72bc3" containerID="42750c706dfa3dc5522ac0cba5f236fde4489739c1ff1803cbaa58bb708542be" exitCode=0 Mar 13 21:34:02 crc kubenswrapper[5029]: I0313 21:34:02.945957 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557294-jv7l6" event={"ID":"77743f8d-0b5a-4341-b6a0-51b2b7b72bc3","Type":"ContainerDied","Data":"42750c706dfa3dc5522ac0cba5f236fde4489739c1ff1803cbaa58bb708542be"} Mar 13 21:34:04 crc kubenswrapper[5029]: I0313 21:34:04.885751 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557294-jv7l6" Mar 13 21:34:04 crc kubenswrapper[5029]: I0313 21:34:04.970870 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557294-jv7l6" event={"ID":"77743f8d-0b5a-4341-b6a0-51b2b7b72bc3","Type":"ContainerDied","Data":"2c357d4c8c28671543a07da91b9f6335768de63e3078f484686baf4a95588e1e"} Mar 13 21:34:04 crc kubenswrapper[5029]: I0313 21:34:04.971459 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c357d4c8c28671543a07da91b9f6335768de63e3078f484686baf4a95588e1e" Mar 13 21:34:04 crc kubenswrapper[5029]: I0313 21:34:04.970965 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557294-jv7l6" Mar 13 21:34:04 crc kubenswrapper[5029]: I0313 21:34:04.973123 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn2j8\" (UniqueName: \"kubernetes.io/projected/77743f8d-0b5a-4341-b6a0-51b2b7b72bc3-kube-api-access-mn2j8\") pod \"77743f8d-0b5a-4341-b6a0-51b2b7b72bc3\" (UID: \"77743f8d-0b5a-4341-b6a0-51b2b7b72bc3\") " Mar 13 21:34:04 crc kubenswrapper[5029]: I0313 21:34:04.984187 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77743f8d-0b5a-4341-b6a0-51b2b7b72bc3-kube-api-access-mn2j8" (OuterVolumeSpecName: "kube-api-access-mn2j8") pod "77743f8d-0b5a-4341-b6a0-51b2b7b72bc3" (UID: "77743f8d-0b5a-4341-b6a0-51b2b7b72bc3"). InnerVolumeSpecName "kube-api-access-mn2j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:34:05 crc kubenswrapper[5029]: I0313 21:34:05.075602 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn2j8\" (UniqueName: \"kubernetes.io/projected/77743f8d-0b5a-4341-b6a0-51b2b7b72bc3-kube-api-access-mn2j8\") on node \"crc\" DevicePath \"\"" Mar 13 21:34:05 crc kubenswrapper[5029]: I0313 21:34:05.968250 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557288-8mh4x"] Mar 13 21:34:05 crc kubenswrapper[5029]: I0313 21:34:05.978805 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557288-8mh4x"] Mar 13 21:34:06 crc kubenswrapper[5029]: I0313 21:34:06.613372 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160ab6e6-8c27-4dad-8611-1836d3c189d7" path="/var/lib/kubelet/pods/160ab6e6-8c27-4dad-8611-1836d3c189d7/volumes" Mar 13 21:34:08 crc kubenswrapper[5029]: I0313 21:34:08.604539 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:34:08 crc kubenswrapper[5029]: E0313 21:34:08.605072 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:34:14 crc kubenswrapper[5029]: I0313 21:34:14.440146 5029 scope.go:117] "RemoveContainer" containerID="14a7e010cd805d47b140dd182487aef6891acd4a64a7831a2ddb7b4485943351" Mar 13 21:34:19 crc kubenswrapper[5029]: I0313 21:34:19.681331 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:34:19 crc 
kubenswrapper[5029]: E0313 21:34:19.682204 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:34:30 crc kubenswrapper[5029]: I0313 21:34:30.618069 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:34:30 crc kubenswrapper[5029]: E0313 21:34:30.619048 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:34:44 crc kubenswrapper[5029]: I0313 21:34:44.604268 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:34:44 crc kubenswrapper[5029]: E0313 21:34:44.605614 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:34:59 crc kubenswrapper[5029]: I0313 21:34:59.637101 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 
13 21:34:59 crc kubenswrapper[5029]: E0313 21:34:59.638920 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:35:11 crc kubenswrapper[5029]: I0313 21:35:11.599714 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:35:11 crc kubenswrapper[5029]: E0313 21:35:11.600682 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:35:25 crc kubenswrapper[5029]: I0313 21:35:25.600570 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:35:25 crc kubenswrapper[5029]: E0313 21:35:25.602633 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:35:36 crc kubenswrapper[5029]: I0313 21:35:36.599714 5029 scope.go:117] "RemoveContainer" 
containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:35:36 crc kubenswrapper[5029]: E0313 21:35:36.600593 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:35:47 crc kubenswrapper[5029]: I0313 21:35:47.599342 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:35:47 crc kubenswrapper[5029]: E0313 21:35:47.600255 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.159461 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557296-mjnv8"] Mar 13 21:36:00 crc kubenswrapper[5029]: E0313 21:36:00.160612 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77743f8d-0b5a-4341-b6a0-51b2b7b72bc3" containerName="oc" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.160627 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="77743f8d-0b5a-4341-b6a0-51b2b7b72bc3" containerName="oc" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.160868 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="77743f8d-0b5a-4341-b6a0-51b2b7b72bc3" containerName="oc" Mar 13 21:36:00 crc 
kubenswrapper[5029]: I0313 21:36:00.161642 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557296-mjnv8" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.164808 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.164815 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.166404 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.177235 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557296-mjnv8"] Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.321150 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dff2x\" (UniqueName: \"kubernetes.io/projected/49270907-f742-43aa-866e-a83f5eea76fd-kube-api-access-dff2x\") pod \"auto-csr-approver-29557296-mjnv8\" (UID: \"49270907-f742-43aa-866e-a83f5eea76fd\") " pod="openshift-infra/auto-csr-approver-29557296-mjnv8" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.424537 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dff2x\" (UniqueName: \"kubernetes.io/projected/49270907-f742-43aa-866e-a83f5eea76fd-kube-api-access-dff2x\") pod \"auto-csr-approver-29557296-mjnv8\" (UID: \"49270907-f742-43aa-866e-a83f5eea76fd\") " pod="openshift-infra/auto-csr-approver-29557296-mjnv8" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.454296 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dff2x\" (UniqueName: \"kubernetes.io/projected/49270907-f742-43aa-866e-a83f5eea76fd-kube-api-access-dff2x\") pod 
\"auto-csr-approver-29557296-mjnv8\" (UID: \"49270907-f742-43aa-866e-a83f5eea76fd\") " pod="openshift-infra/auto-csr-approver-29557296-mjnv8" Mar 13 21:36:00 crc kubenswrapper[5029]: I0313 21:36:00.494011 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557296-mjnv8" Mar 13 21:36:01 crc kubenswrapper[5029]: I0313 21:36:01.073472 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557296-mjnv8"] Mar 13 21:36:01 crc kubenswrapper[5029]: I0313 21:36:01.148076 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557296-mjnv8" event={"ID":"49270907-f742-43aa-866e-a83f5eea76fd","Type":"ContainerStarted","Data":"31423e38fcd3841475456541a19d794994dc37c0c88a1700d8c3a4dde164d33d"} Mar 13 21:36:01 crc kubenswrapper[5029]: I0313 21:36:01.600061 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:36:01 crc kubenswrapper[5029]: E0313 21:36:01.600361 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:36:03 crc kubenswrapper[5029]: I0313 21:36:03.181713 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557296-mjnv8" event={"ID":"49270907-f742-43aa-866e-a83f5eea76fd","Type":"ContainerStarted","Data":"11e1be8314f98cc31d2182e3907ea0d9828aa3f5568ea05ac2501e2273fe2c26"} Mar 13 21:36:03 crc kubenswrapper[5029]: I0313 21:36:03.209387 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29557296-mjnv8" podStartSLOduration=2.34017056 podStartE2EDuration="3.209363933s" podCreationTimestamp="2026-03-13 21:36:00 +0000 UTC" firstStartedPulling="2026-03-13 21:36:01.082256208 +0000 UTC m=+4121.098338611" lastFinishedPulling="2026-03-13 21:36:01.951449581 +0000 UTC m=+4121.967531984" observedRunningTime="2026-03-13 21:36:03.201767974 +0000 UTC m=+4123.217850397" watchObservedRunningTime="2026-03-13 21:36:03.209363933 +0000 UTC m=+4123.225446336" Mar 13 21:36:04 crc kubenswrapper[5029]: I0313 21:36:04.196480 5029 generic.go:334] "Generic (PLEG): container finished" podID="49270907-f742-43aa-866e-a83f5eea76fd" containerID="11e1be8314f98cc31d2182e3907ea0d9828aa3f5568ea05ac2501e2273fe2c26" exitCode=0 Mar 13 21:36:04 crc kubenswrapper[5029]: I0313 21:36:04.196601 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557296-mjnv8" event={"ID":"49270907-f742-43aa-866e-a83f5eea76fd","Type":"ContainerDied","Data":"11e1be8314f98cc31d2182e3907ea0d9828aa3f5568ea05ac2501e2273fe2c26"} Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.120047 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557296-mjnv8" Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.190108 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dff2x\" (UniqueName: \"kubernetes.io/projected/49270907-f742-43aa-866e-a83f5eea76fd-kube-api-access-dff2x\") pod \"49270907-f742-43aa-866e-a83f5eea76fd\" (UID: \"49270907-f742-43aa-866e-a83f5eea76fd\") " Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.243673 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49270907-f742-43aa-866e-a83f5eea76fd-kube-api-access-dff2x" (OuterVolumeSpecName: "kube-api-access-dff2x") pod "49270907-f742-43aa-866e-a83f5eea76fd" (UID: "49270907-f742-43aa-866e-a83f5eea76fd"). InnerVolumeSpecName "kube-api-access-dff2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.265409 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557296-mjnv8" event={"ID":"49270907-f742-43aa-866e-a83f5eea76fd","Type":"ContainerDied","Data":"31423e38fcd3841475456541a19d794994dc37c0c88a1700d8c3a4dde164d33d"} Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.265459 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31423e38fcd3841475456541a19d794994dc37c0c88a1700d8c3a4dde164d33d" Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.265548 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557296-mjnv8" Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.287335 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557290-cdvm9"] Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.299366 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557290-cdvm9"] Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.308561 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dff2x\" (UniqueName: \"kubernetes.io/projected/49270907-f742-43aa-866e-a83f5eea76fd-kube-api-access-dff2x\") on node \"crc\" DevicePath \"\"" Mar 13 21:36:06 crc kubenswrapper[5029]: I0313 21:36:06.613302 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe" path="/var/lib/kubelet/pods/ecde9149-9e0f-4a1d-bab2-b1196ff5c9fe/volumes" Mar 13 21:36:13 crc kubenswrapper[5029]: I0313 21:36:13.599817 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:36:13 crc kubenswrapper[5029]: E0313 21:36:13.600677 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:36:14 crc kubenswrapper[5029]: I0313 21:36:14.560871 5029 scope.go:117] "RemoveContainer" containerID="0bc74a3f316ff7b85e92e40dae860cd103b2fef3d06b46bf5cde611dc38d0148" Mar 13 21:36:24 crc kubenswrapper[5029]: I0313 21:36:24.599951 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 
21:36:24 crc kubenswrapper[5029]: E0313 21:36:24.601969 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:36:38 crc kubenswrapper[5029]: I0313 21:36:38.599323 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:36:38 crc kubenswrapper[5029]: E0313 21:36:38.601627 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:36:51 crc kubenswrapper[5029]: I0313 21:36:51.599745 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:36:51 crc kubenswrapper[5029]: E0313 21:36:51.600621 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:37:06 crc kubenswrapper[5029]: I0313 21:37:06.599824 5029 scope.go:117] "RemoveContainer" 
containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:37:06 crc kubenswrapper[5029]: E0313 21:37:06.600641 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:37:19 crc kubenswrapper[5029]: I0313 21:37:19.600819 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:37:19 crc kubenswrapper[5029]: E0313 21:37:19.601632 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:37:31 crc kubenswrapper[5029]: I0313 21:37:31.600585 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:37:31 crc kubenswrapper[5029]: E0313 21:37:31.601992 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:37:44 crc kubenswrapper[5029]: I0313 21:37:44.600452 5029 scope.go:117] 
"RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:37:44 crc kubenswrapper[5029]: E0313 21:37:44.601628 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.295610 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:38:00 crc kubenswrapper[5029]: E0313 21:38:00.298749 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.352185 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557298-q7jm9"] Mar 13 21:38:00 crc kubenswrapper[5029]: E0313 21:38:00.353022 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49270907-f742-43aa-866e-a83f5eea76fd" containerName="oc" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.353141 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="49270907-f742-43aa-866e-a83f5eea76fd" containerName="oc" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.353588 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="49270907-f742-43aa-866e-a83f5eea76fd" containerName="oc" Mar 
13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.354463 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557298-q7jm9" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.357927 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.363924 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.364075 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.371618 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557298-q7jm9"] Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.407357 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjfp\" (UniqueName: \"kubernetes.io/projected/2ff4b0cf-2750-4227-8d11-f80a1576568b-kube-api-access-jvjfp\") pod \"auto-csr-approver-29557298-q7jm9\" (UID: \"2ff4b0cf-2750-4227-8d11-f80a1576568b\") " pod="openshift-infra/auto-csr-approver-29557298-q7jm9" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.510356 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjfp\" (UniqueName: \"kubernetes.io/projected/2ff4b0cf-2750-4227-8d11-f80a1576568b-kube-api-access-jvjfp\") pod \"auto-csr-approver-29557298-q7jm9\" (UID: \"2ff4b0cf-2750-4227-8d11-f80a1576568b\") " pod="openshift-infra/auto-csr-approver-29557298-q7jm9" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.540535 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjfp\" (UniqueName: 
\"kubernetes.io/projected/2ff4b0cf-2750-4227-8d11-f80a1576568b-kube-api-access-jvjfp\") pod \"auto-csr-approver-29557298-q7jm9\" (UID: \"2ff4b0cf-2750-4227-8d11-f80a1576568b\") " pod="openshift-infra/auto-csr-approver-29557298-q7jm9" Mar 13 21:38:00 crc kubenswrapper[5029]: I0313 21:38:00.695210 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557298-q7jm9" Mar 13 21:38:01 crc kubenswrapper[5029]: I0313 21:38:01.227119 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557298-q7jm9"] Mar 13 21:38:01 crc kubenswrapper[5029]: I0313 21:38:01.437687 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557298-q7jm9" event={"ID":"2ff4b0cf-2750-4227-8d11-f80a1576568b","Type":"ContainerStarted","Data":"b238d7138d1dbb1cde6eb592e8cf241e8fe127a4595c7823ae0ce5ac31783875"} Mar 13 21:38:03 crc kubenswrapper[5029]: I0313 21:38:03.477934 5029 generic.go:334] "Generic (PLEG): container finished" podID="2ff4b0cf-2750-4227-8d11-f80a1576568b" containerID="69cd77a7565abaaf2e77c67407dc46aae581964de76d681401dcc9b49aefbb9c" exitCode=0 Mar 13 21:38:03 crc kubenswrapper[5029]: I0313 21:38:03.478536 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557298-q7jm9" event={"ID":"2ff4b0cf-2750-4227-8d11-f80a1576568b","Type":"ContainerDied","Data":"69cd77a7565abaaf2e77c67407dc46aae581964de76d681401dcc9b49aefbb9c"} Mar 13 21:38:05 crc kubenswrapper[5029]: I0313 21:38:05.706731 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557298-q7jm9" Mar 13 21:38:05 crc kubenswrapper[5029]: I0313 21:38:05.839858 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvjfp\" (UniqueName: \"kubernetes.io/projected/2ff4b0cf-2750-4227-8d11-f80a1576568b-kube-api-access-jvjfp\") pod \"2ff4b0cf-2750-4227-8d11-f80a1576568b\" (UID: \"2ff4b0cf-2750-4227-8d11-f80a1576568b\") " Mar 13 21:38:05 crc kubenswrapper[5029]: I0313 21:38:05.851686 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff4b0cf-2750-4227-8d11-f80a1576568b-kube-api-access-jvjfp" (OuterVolumeSpecName: "kube-api-access-jvjfp") pod "2ff4b0cf-2750-4227-8d11-f80a1576568b" (UID: "2ff4b0cf-2750-4227-8d11-f80a1576568b"). InnerVolumeSpecName "kube-api-access-jvjfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:38:05 crc kubenswrapper[5029]: I0313 21:38:05.942252 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvjfp\" (UniqueName: \"kubernetes.io/projected/2ff4b0cf-2750-4227-8d11-f80a1576568b-kube-api-access-jvjfp\") on node \"crc\" DevicePath \"\"" Mar 13 21:38:06 crc kubenswrapper[5029]: I0313 21:38:06.504518 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557298-q7jm9" event={"ID":"2ff4b0cf-2750-4227-8d11-f80a1576568b","Type":"ContainerDied","Data":"b238d7138d1dbb1cde6eb592e8cf241e8fe127a4595c7823ae0ce5ac31783875"} Mar 13 21:38:06 crc kubenswrapper[5029]: I0313 21:38:06.504772 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b238d7138d1dbb1cde6eb592e8cf241e8fe127a4595c7823ae0ce5ac31783875" Mar 13 21:38:06 crc kubenswrapper[5029]: I0313 21:38:06.504587 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557298-q7jm9" Mar 13 21:38:06 crc kubenswrapper[5029]: I0313 21:38:06.790297 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557292-wxrv4"] Mar 13 21:38:06 crc kubenswrapper[5029]: I0313 21:38:06.799367 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557292-wxrv4"] Mar 13 21:38:08 crc kubenswrapper[5029]: I0313 21:38:08.613126 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9cdcb3c-ecdb-4b28-977d-cb95867968b9" path="/var/lib/kubelet/pods/e9cdcb3c-ecdb-4b28-977d-cb95867968b9/volumes" Mar 13 21:38:14 crc kubenswrapper[5029]: I0313 21:38:14.611252 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:38:14 crc kubenswrapper[5029]: I0313 21:38:14.727169 5029 scope.go:117] "RemoveContainer" containerID="12a5cb04f3a8200305a7b0a7611cc80080f4126c72698ddd755cb37fe2dbdf0f" Mar 13 21:38:15 crc kubenswrapper[5029]: I0313 21:38:15.679930 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"449c79024f27b1eca0c3dd6e13388b325187db7167c402eecbe9ac1b3ab04370"} Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.162184 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557300-fcv76"] Mar 13 21:40:00 crc kubenswrapper[5029]: E0313 21:40:00.163801 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff4b0cf-2750-4227-8d11-f80a1576568b" containerName="oc" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.163818 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff4b0cf-2750-4227-8d11-f80a1576568b" containerName="oc" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.164088 5029 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff4b0cf-2750-4227-8d11-f80a1576568b" containerName="oc" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.165077 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557300-fcv76" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.168520 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.168708 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.170638 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.178138 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557300-fcv76"] Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.220250 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r5v5\" (UniqueName: \"kubernetes.io/projected/142238c8-8b74-4ec9-a06b-ac4acef624fe-kube-api-access-5r5v5\") pod \"auto-csr-approver-29557300-fcv76\" (UID: \"142238c8-8b74-4ec9-a06b-ac4acef624fe\") " pod="openshift-infra/auto-csr-approver-29557300-fcv76" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.323029 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r5v5\" (UniqueName: \"kubernetes.io/projected/142238c8-8b74-4ec9-a06b-ac4acef624fe-kube-api-access-5r5v5\") pod \"auto-csr-approver-29557300-fcv76\" (UID: \"142238c8-8b74-4ec9-a06b-ac4acef624fe\") " pod="openshift-infra/auto-csr-approver-29557300-fcv76" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.352523 5029 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-5r5v5\" (UniqueName: \"kubernetes.io/projected/142238c8-8b74-4ec9-a06b-ac4acef624fe-kube-api-access-5r5v5\") pod \"auto-csr-approver-29557300-fcv76\" (UID: \"142238c8-8b74-4ec9-a06b-ac4acef624fe\") " pod="openshift-infra/auto-csr-approver-29557300-fcv76" Mar 13 21:40:00 crc kubenswrapper[5029]: I0313 21:40:00.522679 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557300-fcv76" Mar 13 21:40:01 crc kubenswrapper[5029]: I0313 21:40:01.084992 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557300-fcv76"] Mar 13 21:40:01 crc kubenswrapper[5029]: I0313 21:40:01.116480 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 21:40:01 crc kubenswrapper[5029]: I0313 21:40:01.748836 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557300-fcv76" event={"ID":"142238c8-8b74-4ec9-a06b-ac4acef624fe","Type":"ContainerStarted","Data":"2d86b5e03337bece3e484f01f6b7b2e4ad8585b48ba5ff2e4329ed897a18b4c8"} Mar 13 21:40:03 crc kubenswrapper[5029]: I0313 21:40:03.770075 5029 generic.go:334] "Generic (PLEG): container finished" podID="142238c8-8b74-4ec9-a06b-ac4acef624fe" containerID="9ea8fdcb12dea2ec5c0950f5b2eee5f006a52611aef205e8f6f66c1306682da5" exitCode=0 Mar 13 21:40:03 crc kubenswrapper[5029]: I0313 21:40:03.770596 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557300-fcv76" event={"ID":"142238c8-8b74-4ec9-a06b-ac4acef624fe","Type":"ContainerDied","Data":"9ea8fdcb12dea2ec5c0950f5b2eee5f006a52611aef205e8f6f66c1306682da5"} Mar 13 21:40:05 crc kubenswrapper[5029]: I0313 21:40:05.474589 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557300-fcv76" Mar 13 21:40:05 crc kubenswrapper[5029]: I0313 21:40:05.567972 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r5v5\" (UniqueName: \"kubernetes.io/projected/142238c8-8b74-4ec9-a06b-ac4acef624fe-kube-api-access-5r5v5\") pod \"142238c8-8b74-4ec9-a06b-ac4acef624fe\" (UID: \"142238c8-8b74-4ec9-a06b-ac4acef624fe\") " Mar 13 21:40:05 crc kubenswrapper[5029]: I0313 21:40:05.596651 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142238c8-8b74-4ec9-a06b-ac4acef624fe-kube-api-access-5r5v5" (OuterVolumeSpecName: "kube-api-access-5r5v5") pod "142238c8-8b74-4ec9-a06b-ac4acef624fe" (UID: "142238c8-8b74-4ec9-a06b-ac4acef624fe"). InnerVolumeSpecName "kube-api-access-5r5v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:40:05 crc kubenswrapper[5029]: I0313 21:40:05.671760 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r5v5\" (UniqueName: \"kubernetes.io/projected/142238c8-8b74-4ec9-a06b-ac4acef624fe-kube-api-access-5r5v5\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:05 crc kubenswrapper[5029]: I0313 21:40:05.795695 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557300-fcv76" event={"ID":"142238c8-8b74-4ec9-a06b-ac4acef624fe","Type":"ContainerDied","Data":"2d86b5e03337bece3e484f01f6b7b2e4ad8585b48ba5ff2e4329ed897a18b4c8"} Mar 13 21:40:05 crc kubenswrapper[5029]: I0313 21:40:05.796052 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557300-fcv76" Mar 13 21:40:05 crc kubenswrapper[5029]: I0313 21:40:05.796084 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d86b5e03337bece3e484f01f6b7b2e4ad8585b48ba5ff2e4329ed897a18b4c8" Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.558411 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557294-jv7l6"] Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.570736 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557294-jv7l6"] Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.614952 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77743f8d-0b5a-4341-b6a0-51b2b7b72bc3" path="/var/lib/kubelet/pods/77743f8d-0b5a-4341-b6a0-51b2b7b72bc3/volumes" Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.832144 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zksvx"] Mar 13 21:40:06 crc kubenswrapper[5029]: E0313 21:40:06.832705 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142238c8-8b74-4ec9-a06b-ac4acef624fe" containerName="oc" Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.832736 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="142238c8-8b74-4ec9-a06b-ac4acef624fe" containerName="oc" Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.833034 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="142238c8-8b74-4ec9-a06b-ac4acef624fe" containerName="oc" Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.834876 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.851193 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zksvx"] Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.907621 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcr4\" (UniqueName: \"kubernetes.io/projected/db37dfcc-e42a-42fc-8734-4b9111b91b8b-kube-api-access-jbcr4\") pod \"redhat-marketplace-zksvx\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.907708 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-catalog-content\") pod \"redhat-marketplace-zksvx\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:06 crc kubenswrapper[5029]: I0313 21:40:06.907815 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-utilities\") pod \"redhat-marketplace-zksvx\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:07 crc kubenswrapper[5029]: I0313 21:40:07.010307 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbcr4\" (UniqueName: \"kubernetes.io/projected/db37dfcc-e42a-42fc-8734-4b9111b91b8b-kube-api-access-jbcr4\") pod \"redhat-marketplace-zksvx\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:07 crc kubenswrapper[5029]: I0313 21:40:07.010408 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-catalog-content\") pod \"redhat-marketplace-zksvx\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:07 crc kubenswrapper[5029]: I0313 21:40:07.010594 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-utilities\") pod \"redhat-marketplace-zksvx\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:07 crc kubenswrapper[5029]: I0313 21:40:07.011023 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-catalog-content\") pod \"redhat-marketplace-zksvx\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:07 crc kubenswrapper[5029]: I0313 21:40:07.011052 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-utilities\") pod \"redhat-marketplace-zksvx\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:07 crc kubenswrapper[5029]: I0313 21:40:07.042491 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbcr4\" (UniqueName: \"kubernetes.io/projected/db37dfcc-e42a-42fc-8734-4b9111b91b8b-kube-api-access-jbcr4\") pod \"redhat-marketplace-zksvx\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:07 crc kubenswrapper[5029]: I0313 21:40:07.173914 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:07 crc kubenswrapper[5029]: I0313 21:40:07.721338 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zksvx"] Mar 13 21:40:07 crc kubenswrapper[5029]: I0313 21:40:07.825921 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zksvx" event={"ID":"db37dfcc-e42a-42fc-8734-4b9111b91b8b","Type":"ContainerStarted","Data":"ce429f663a387f788f7594a7a7f0973f7d45fb0c6e2a01f0639594eb9296877a"} Mar 13 21:40:08 crc kubenswrapper[5029]: I0313 21:40:08.840055 5029 generic.go:334] "Generic (PLEG): container finished" podID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerID="7498f15d3ef96c38dd2c5c1fbbfbbd54db9b9c88936ec6317023d791a6ce3379" exitCode=0 Mar 13 21:40:08 crc kubenswrapper[5029]: I0313 21:40:08.840184 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zksvx" event={"ID":"db37dfcc-e42a-42fc-8734-4b9111b91b8b","Type":"ContainerDied","Data":"7498f15d3ef96c38dd2c5c1fbbfbbd54db9b9c88936ec6317023d791a6ce3379"} Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.241799 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ghw5p"] Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.244675 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.259102 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ghw5p"] Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.379188 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-catalog-content\") pod \"community-operators-ghw5p\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.379338 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-utilities\") pod \"community-operators-ghw5p\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.379439 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j56wc\" (UniqueName: \"kubernetes.io/projected/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-kube-api-access-j56wc\") pod \"community-operators-ghw5p\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.481333 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-utilities\") pod \"community-operators-ghw5p\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.481444 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j56wc\" (UniqueName: \"kubernetes.io/projected/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-kube-api-access-j56wc\") pod \"community-operators-ghw5p\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.481529 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-catalog-content\") pod \"community-operators-ghw5p\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.482040 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-utilities\") pod \"community-operators-ghw5p\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.482066 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-catalog-content\") pod \"community-operators-ghw5p\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.506734 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j56wc\" (UniqueName: \"kubernetes.io/projected/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-kube-api-access-j56wc\") pod \"community-operators-ghw5p\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.575654 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.848107 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7m54x"] Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.851199 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:09 crc kubenswrapper[5029]: I0313 21:40:09.864779 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m54x"] Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.005583 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdbfc\" (UniqueName: \"kubernetes.io/projected/faa5053c-9d62-4861-8d5d-50b84294d5ad-kube-api-access-zdbfc\") pod \"redhat-operators-7m54x\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.008098 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-catalog-content\") pod \"redhat-operators-7m54x\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.008225 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-utilities\") pod \"redhat-operators-7m54x\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.113926 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-catalog-content\") pod \"redhat-operators-7m54x\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.114533 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-utilities\") pod \"redhat-operators-7m54x\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.114732 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdbfc\" (UniqueName: \"kubernetes.io/projected/faa5053c-9d62-4861-8d5d-50b84294d5ad-kube-api-access-zdbfc\") pod \"redhat-operators-7m54x\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.116060 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-catalog-content\") pod \"redhat-operators-7m54x\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.118008 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-utilities\") pod \"redhat-operators-7m54x\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.141782 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdbfc\" (UniqueName: 
\"kubernetes.io/projected/faa5053c-9d62-4861-8d5d-50b84294d5ad-kube-api-access-zdbfc\") pod \"redhat-operators-7m54x\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.259404 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.320222 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ghw5p"] Mar 13 21:40:10 crc kubenswrapper[5029]: W0313 21:40:10.334331 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb320d7a7_2bd6_49ce_9a92_3b742510ccf7.slice/crio-496eff0b7125f50847ccddf88989c68713aae09e1ac8efcb1877f32b3e63ae6b WatchSource:0}: Error finding container 496eff0b7125f50847ccddf88989c68713aae09e1ac8efcb1877f32b3e63ae6b: Status 404 returned error can't find the container with id 496eff0b7125f50847ccddf88989c68713aae09e1ac8efcb1877f32b3e63ae6b Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.824555 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m54x"] Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.891401 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m54x" event={"ID":"faa5053c-9d62-4861-8d5d-50b84294d5ad","Type":"ContainerStarted","Data":"99ec84db0d78f84dff556b6e8e9861ec6c3ad3b8b1beeb2286dea4e46657daa7"} Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.896840 5029 generic.go:334] "Generic (PLEG): container finished" podID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerID="aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4" exitCode=0 Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.897120 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-ghw5p" event={"ID":"b320d7a7-2bd6-49ce-9a92-3b742510ccf7","Type":"ContainerDied","Data":"aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4"} Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.897241 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ghw5p" event={"ID":"b320d7a7-2bd6-49ce-9a92-3b742510ccf7","Type":"ContainerStarted","Data":"496eff0b7125f50847ccddf88989c68713aae09e1ac8efcb1877f32b3e63ae6b"} Mar 13 21:40:10 crc kubenswrapper[5029]: I0313 21:40:10.910555 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zksvx" event={"ID":"db37dfcc-e42a-42fc-8734-4b9111b91b8b","Type":"ContainerStarted","Data":"6c8279a99fff2db79d3a69d6b7e6cac6d9e7628bbb3e7eb24cb0f12aa9f04161"} Mar 13 21:40:11 crc kubenswrapper[5029]: I0313 21:40:11.922517 5029 generic.go:334] "Generic (PLEG): container finished" podID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerID="6c8279a99fff2db79d3a69d6b7e6cac6d9e7628bbb3e7eb24cb0f12aa9f04161" exitCode=0 Mar 13 21:40:11 crc kubenswrapper[5029]: I0313 21:40:11.922620 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zksvx" event={"ID":"db37dfcc-e42a-42fc-8734-4b9111b91b8b","Type":"ContainerDied","Data":"6c8279a99fff2db79d3a69d6b7e6cac6d9e7628bbb3e7eb24cb0f12aa9f04161"} Mar 13 21:40:11 crc kubenswrapper[5029]: I0313 21:40:11.927986 5029 generic.go:334] "Generic (PLEG): container finished" podID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerID="73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11" exitCode=0 Mar 13 21:40:11 crc kubenswrapper[5029]: I0313 21:40:11.928031 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m54x" event={"ID":"faa5053c-9d62-4861-8d5d-50b84294d5ad","Type":"ContainerDied","Data":"73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11"} 
Mar 13 21:40:11 crc kubenswrapper[5029]: I0313 21:40:11.941294 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ghw5p" event={"ID":"b320d7a7-2bd6-49ce-9a92-3b742510ccf7","Type":"ContainerStarted","Data":"140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19"} Mar 13 21:40:12 crc kubenswrapper[5029]: I0313 21:40:12.952056 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m54x" event={"ID":"faa5053c-9d62-4861-8d5d-50b84294d5ad","Type":"ContainerStarted","Data":"2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30"} Mar 13 21:40:12 crc kubenswrapper[5029]: I0313 21:40:12.955478 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zksvx" event={"ID":"db37dfcc-e42a-42fc-8734-4b9111b91b8b","Type":"ContainerStarted","Data":"9f136431d0f0707d0967e8e0995e87ba05c78c5e23d1c13845129d6ba3ccdfd8"} Mar 13 21:40:12 crc kubenswrapper[5029]: I0313 21:40:12.958510 5029 generic.go:334] "Generic (PLEG): container finished" podID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerID="140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19" exitCode=0 Mar 13 21:40:12 crc kubenswrapper[5029]: I0313 21:40:12.958579 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ghw5p" event={"ID":"b320d7a7-2bd6-49ce-9a92-3b742510ccf7","Type":"ContainerDied","Data":"140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19"} Mar 13 21:40:13 crc kubenswrapper[5029]: I0313 21:40:13.051730 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zksvx" podStartSLOduration=3.539443 podStartE2EDuration="7.051699661s" podCreationTimestamp="2026-03-13 21:40:06 +0000 UTC" firstStartedPulling="2026-03-13 21:40:08.842971069 +0000 UTC m=+4368.859053472" lastFinishedPulling="2026-03-13 21:40:12.35522773 +0000 UTC 
m=+4372.371310133" observedRunningTime="2026-03-13 21:40:13.032155985 +0000 UTC m=+4373.048238388" watchObservedRunningTime="2026-03-13 21:40:13.051699661 +0000 UTC m=+4373.067782054" Mar 13 21:40:13 crc kubenswrapper[5029]: I0313 21:40:13.971170 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ghw5p" event={"ID":"b320d7a7-2bd6-49ce-9a92-3b742510ccf7","Type":"ContainerStarted","Data":"b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7"} Mar 13 21:40:13 crc kubenswrapper[5029]: I0313 21:40:13.997196 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ghw5p" podStartSLOduration=2.246895107 podStartE2EDuration="4.99717531s" podCreationTimestamp="2026-03-13 21:40:09 +0000 UTC" firstStartedPulling="2026-03-13 21:40:10.899700129 +0000 UTC m=+4370.915782532" lastFinishedPulling="2026-03-13 21:40:13.649980332 +0000 UTC m=+4373.666062735" observedRunningTime="2026-03-13 21:40:13.994975819 +0000 UTC m=+4374.011058222" watchObservedRunningTime="2026-03-13 21:40:13.99717531 +0000 UTC m=+4374.013257713" Mar 13 21:40:14 crc kubenswrapper[5029]: I0313 21:40:14.964040 5029 scope.go:117] "RemoveContainer" containerID="42750c706dfa3dc5522ac0cba5f236fde4489739c1ff1803cbaa58bb708542be" Mar 13 21:40:17 crc kubenswrapper[5029]: I0313 21:40:17.174465 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:17 crc kubenswrapper[5029]: I0313 21:40:17.174803 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:17 crc kubenswrapper[5029]: I0313 21:40:17.231706 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:18 crc kubenswrapper[5029]: I0313 21:40:18.089091 5029 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:18 crc kubenswrapper[5029]: I0313 21:40:18.827485 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zksvx"] Mar 13 21:40:19 crc kubenswrapper[5029]: I0313 21:40:19.576032 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:19 crc kubenswrapper[5029]: I0313 21:40:19.576333 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:19 crc kubenswrapper[5029]: I0313 21:40:19.641139 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:20 crc kubenswrapper[5029]: I0313 21:40:20.039403 5029 generic.go:334] "Generic (PLEG): container finished" podID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerID="2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30" exitCode=0 Mar 13 21:40:20 crc kubenswrapper[5029]: I0313 21:40:20.039798 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m54x" event={"ID":"faa5053c-9d62-4861-8d5d-50b84294d5ad","Type":"ContainerDied","Data":"2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30"} Mar 13 21:40:20 crc kubenswrapper[5029]: I0313 21:40:20.040798 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zksvx" podUID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerName="registry-server" containerID="cri-o://9f136431d0f0707d0967e8e0995e87ba05c78c5e23d1c13845129d6ba3ccdfd8" gracePeriod=2 Mar 13 21:40:20 crc kubenswrapper[5029]: I0313 21:40:20.102513 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:21 crc 
kubenswrapper[5029]: I0313 21:40:21.053208 5029 generic.go:334] "Generic (PLEG): container finished" podID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerID="9f136431d0f0707d0967e8e0995e87ba05c78c5e23d1c13845129d6ba3ccdfd8" exitCode=0 Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.053310 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zksvx" event={"ID":"db37dfcc-e42a-42fc-8734-4b9111b91b8b","Type":"ContainerDied","Data":"9f136431d0f0707d0967e8e0995e87ba05c78c5e23d1c13845129d6ba3ccdfd8"} Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.053970 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zksvx" event={"ID":"db37dfcc-e42a-42fc-8734-4b9111b91b8b","Type":"ContainerDied","Data":"ce429f663a387f788f7594a7a7f0973f7d45fb0c6e2a01f0639594eb9296877a"} Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.053995 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce429f663a387f788f7594a7a7f0973f7d45fb0c6e2a01f0639594eb9296877a" Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.484475 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.559054 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-utilities\") pod \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.559154 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbcr4\" (UniqueName: \"kubernetes.io/projected/db37dfcc-e42a-42fc-8734-4b9111b91b8b-kube-api-access-jbcr4\") pod \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.559401 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-catalog-content\") pod \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\" (UID: \"db37dfcc-e42a-42fc-8734-4b9111b91b8b\") " Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.560100 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-utilities" (OuterVolumeSpecName: "utilities") pod "db37dfcc-e42a-42fc-8734-4b9111b91b8b" (UID: "db37dfcc-e42a-42fc-8734-4b9111b91b8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.567007 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db37dfcc-e42a-42fc-8734-4b9111b91b8b-kube-api-access-jbcr4" (OuterVolumeSpecName: "kube-api-access-jbcr4") pod "db37dfcc-e42a-42fc-8734-4b9111b91b8b" (UID: "db37dfcc-e42a-42fc-8734-4b9111b91b8b"). InnerVolumeSpecName "kube-api-access-jbcr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.590959 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db37dfcc-e42a-42fc-8734-4b9111b91b8b" (UID: "db37dfcc-e42a-42fc-8734-4b9111b91b8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.662782 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.663168 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbcr4\" (UniqueName: \"kubernetes.io/projected/db37dfcc-e42a-42fc-8734-4b9111b91b8b-kube-api-access-jbcr4\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:21 crc kubenswrapper[5029]: I0313 21:40:21.663189 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db37dfcc-e42a-42fc-8734-4b9111b91b8b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:22 crc kubenswrapper[5029]: I0313 21:40:22.028359 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ghw5p"] Mar 13 21:40:22 crc kubenswrapper[5029]: I0313 21:40:22.068709 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m54x" event={"ID":"faa5053c-9d62-4861-8d5d-50b84294d5ad","Type":"ContainerStarted","Data":"509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2"} Mar 13 21:40:22 crc kubenswrapper[5029]: I0313 21:40:22.069717 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zksvx" Mar 13 21:40:22 crc kubenswrapper[5029]: I0313 21:40:22.103085 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7m54x" podStartSLOduration=4.545856518 podStartE2EDuration="13.103062183s" podCreationTimestamp="2026-03-13 21:40:09 +0000 UTC" firstStartedPulling="2026-03-13 21:40:11.929799867 +0000 UTC m=+4371.945882270" lastFinishedPulling="2026-03-13 21:40:20.487005532 +0000 UTC m=+4380.503087935" observedRunningTime="2026-03-13 21:40:22.090701464 +0000 UTC m=+4382.106783877" watchObservedRunningTime="2026-03-13 21:40:22.103062183 +0000 UTC m=+4382.119144606" Mar 13 21:40:22 crc kubenswrapper[5029]: I0313 21:40:22.119688 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zksvx"] Mar 13 21:40:22 crc kubenswrapper[5029]: I0313 21:40:22.129213 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zksvx"] Mar 13 21:40:22 crc kubenswrapper[5029]: I0313 21:40:22.614065 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" path="/var/lib/kubelet/pods/db37dfcc-e42a-42fc-8734-4b9111b91b8b/volumes" Mar 13 21:40:23 crc kubenswrapper[5029]: I0313 21:40:23.082090 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ghw5p" podUID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerName="registry-server" containerID="cri-o://b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7" gracePeriod=2 Mar 13 21:40:23 crc kubenswrapper[5029]: I0313 21:40:23.801483 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:23 crc kubenswrapper[5029]: I0313 21:40:23.919050 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j56wc\" (UniqueName: \"kubernetes.io/projected/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-kube-api-access-j56wc\") pod \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " Mar 13 21:40:23 crc kubenswrapper[5029]: I0313 21:40:23.919308 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-utilities\") pod \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " Mar 13 21:40:23 crc kubenswrapper[5029]: I0313 21:40:23.919387 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-catalog-content\") pod \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\" (UID: \"b320d7a7-2bd6-49ce-9a92-3b742510ccf7\") " Mar 13 21:40:23 crc kubenswrapper[5029]: I0313 21:40:23.922574 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-utilities" (OuterVolumeSpecName: "utilities") pod "b320d7a7-2bd6-49ce-9a92-3b742510ccf7" (UID: "b320d7a7-2bd6-49ce-9a92-3b742510ccf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:40:23 crc kubenswrapper[5029]: I0313 21:40:23.925346 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-kube-api-access-j56wc" (OuterVolumeSpecName: "kube-api-access-j56wc") pod "b320d7a7-2bd6-49ce-9a92-3b742510ccf7" (UID: "b320d7a7-2bd6-49ce-9a92-3b742510ccf7"). InnerVolumeSpecName "kube-api-access-j56wc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:40:23 crc kubenswrapper[5029]: I0313 21:40:23.974797 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b320d7a7-2bd6-49ce-9a92-3b742510ccf7" (UID: "b320d7a7-2bd6-49ce-9a92-3b742510ccf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.022373 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.022409 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.022422 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j56wc\" (UniqueName: \"kubernetes.io/projected/b320d7a7-2bd6-49ce-9a92-3b742510ccf7-kube-api-access-j56wc\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.092166 5029 generic.go:334] "Generic (PLEG): container finished" podID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerID="b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7" exitCode=0 Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.092250 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ghw5p" event={"ID":"b320d7a7-2bd6-49ce-9a92-3b742510ccf7","Type":"ContainerDied","Data":"b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7"} Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.093394 5029 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-ghw5p" event={"ID":"b320d7a7-2bd6-49ce-9a92-3b742510ccf7","Type":"ContainerDied","Data":"496eff0b7125f50847ccddf88989c68713aae09e1ac8efcb1877f32b3e63ae6b"} Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.092351 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ghw5p" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.093472 5029 scope.go:117] "RemoveContainer" containerID="b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.115778 5029 scope.go:117] "RemoveContainer" containerID="140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.130986 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ghw5p"] Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.140238 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ghw5p"] Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.154100 5029 scope.go:117] "RemoveContainer" containerID="aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.221191 5029 scope.go:117] "RemoveContainer" containerID="b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7" Mar 13 21:40:24 crc kubenswrapper[5029]: E0313 21:40:24.223071 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7\": container with ID starting with b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7 not found: ID does not exist" containerID="b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 
21:40:24.223220 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7"} err="failed to get container status \"b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7\": rpc error: code = NotFound desc = could not find container \"b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7\": container with ID starting with b5ccf72652fb7f098220d94cbb6f70671b36c129e5c22afec30231021ace20c7 not found: ID does not exist" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.223327 5029 scope.go:117] "RemoveContainer" containerID="140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19" Mar 13 21:40:24 crc kubenswrapper[5029]: E0313 21:40:24.223769 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19\": container with ID starting with 140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19 not found: ID does not exist" containerID="140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.223928 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19"} err="failed to get container status \"140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19\": rpc error: code = NotFound desc = could not find container \"140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19\": container with ID starting with 140e0fc534e1b5fb7e98c97ed51a0194d13b2b5e053a13858035ea8e0e19ef19 not found: ID does not exist" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.224032 5029 scope.go:117] "RemoveContainer" containerID="aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4" Mar 13 21:40:24 crc 
kubenswrapper[5029]: E0313 21:40:24.229659 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4\": container with ID starting with aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4 not found: ID does not exist" containerID="aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.229964 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4"} err="failed to get container status \"aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4\": rpc error: code = NotFound desc = could not find container \"aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4\": container with ID starting with aaedc18b4600382ff3048888c8a50f8f2727ca61e963fee5fbfaad92d0d1d5f4 not found: ID does not exist" Mar 13 21:40:24 crc kubenswrapper[5029]: I0313 21:40:24.613519 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" path="/var/lib/kubelet/pods/b320d7a7-2bd6-49ce-9a92-3b742510ccf7/volumes" Mar 13 21:40:30 crc kubenswrapper[5029]: I0313 21:40:30.260673 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:30 crc kubenswrapper[5029]: I0313 21:40:30.261265 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:30 crc kubenswrapper[5029]: I0313 21:40:30.506706 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:31 crc kubenswrapper[5029]: I0313 21:40:31.330048 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:31 crc kubenswrapper[5029]: I0313 21:40:31.390563 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7m54x"] Mar 13 21:40:31 crc kubenswrapper[5029]: I0313 21:40:31.950238 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:40:31 crc kubenswrapper[5029]: I0313 21:40:31.950359 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:40:33 crc kubenswrapper[5029]: I0313 21:40:33.189257 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7m54x" podUID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerName="registry-server" containerID="cri-o://509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2" gracePeriod=2 Mar 13 21:40:33 crc kubenswrapper[5029]: I0313 21:40:33.957875 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.054418 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-utilities\") pod \"faa5053c-9d62-4861-8d5d-50b84294d5ad\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.054927 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-catalog-content\") pod \"faa5053c-9d62-4861-8d5d-50b84294d5ad\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.055211 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdbfc\" (UniqueName: \"kubernetes.io/projected/faa5053c-9d62-4861-8d5d-50b84294d5ad-kube-api-access-zdbfc\") pod \"faa5053c-9d62-4861-8d5d-50b84294d5ad\" (UID: \"faa5053c-9d62-4861-8d5d-50b84294d5ad\") " Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.055318 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-utilities" (OuterVolumeSpecName: "utilities") pod "faa5053c-9d62-4861-8d5d-50b84294d5ad" (UID: "faa5053c-9d62-4861-8d5d-50b84294d5ad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.055811 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.061448 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa5053c-9d62-4861-8d5d-50b84294d5ad-kube-api-access-zdbfc" (OuterVolumeSpecName: "kube-api-access-zdbfc") pod "faa5053c-9d62-4861-8d5d-50b84294d5ad" (UID: "faa5053c-9d62-4861-8d5d-50b84294d5ad"). InnerVolumeSpecName "kube-api-access-zdbfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.158758 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdbfc\" (UniqueName: \"kubernetes.io/projected/faa5053c-9d62-4861-8d5d-50b84294d5ad-kube-api-access-zdbfc\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.195026 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faa5053c-9d62-4861-8d5d-50b84294d5ad" (UID: "faa5053c-9d62-4861-8d5d-50b84294d5ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.201352 5029 generic.go:334] "Generic (PLEG): container finished" podID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerID="509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2" exitCode=0 Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.201391 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m54x" event={"ID":"faa5053c-9d62-4861-8d5d-50b84294d5ad","Type":"ContainerDied","Data":"509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2"} Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.201423 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m54x" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.201445 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m54x" event={"ID":"faa5053c-9d62-4861-8d5d-50b84294d5ad","Type":"ContainerDied","Data":"99ec84db0d78f84dff556b6e8e9861ec6c3ad3b8b1beeb2286dea4e46657daa7"} Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.201467 5029 scope.go:117] "RemoveContainer" containerID="509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.226241 5029 scope.go:117] "RemoveContainer" containerID="2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.250950 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7m54x"] Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.261553 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa5053c-9d62-4861-8d5d-50b84294d5ad-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 
21:40:34.265005 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7m54x"] Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.269727 5029 scope.go:117] "RemoveContainer" containerID="73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.309951 5029 scope.go:117] "RemoveContainer" containerID="509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2" Mar 13 21:40:34 crc kubenswrapper[5029]: E0313 21:40:34.310497 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2\": container with ID starting with 509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2 not found: ID does not exist" containerID="509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.310538 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2"} err="failed to get container status \"509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2\": rpc error: code = NotFound desc = could not find container \"509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2\": container with ID starting with 509b1e6e3604d198fbfbcc7d5e13343956d8296b6fd9a7c17684046bf2e0afe2 not found: ID does not exist" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.310561 5029 scope.go:117] "RemoveContainer" containerID="2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30" Mar 13 21:40:34 crc kubenswrapper[5029]: E0313 21:40:34.311121 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30\": container with ID 
starting with 2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30 not found: ID does not exist" containerID="2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.311148 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30"} err="failed to get container status \"2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30\": rpc error: code = NotFound desc = could not find container \"2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30\": container with ID starting with 2e8cd4a0b035b4be3bc7261065adf9875f92c01978f7829d10d2dfc5c6865d30 not found: ID does not exist" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.311164 5029 scope.go:117] "RemoveContainer" containerID="73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11" Mar 13 21:40:34 crc kubenswrapper[5029]: E0313 21:40:34.311629 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11\": container with ID starting with 73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11 not found: ID does not exist" containerID="73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.311662 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11"} err="failed to get container status \"73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11\": rpc error: code = NotFound desc = could not find container \"73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11\": container with ID starting with 73e1eb19e40c7ffe0b8695ac1b203b76b5e1d32dbf293ebf97a16c3fe3902b11 not found: 
ID does not exist" Mar 13 21:40:34 crc kubenswrapper[5029]: I0313 21:40:34.612654 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa5053c-9d62-4861-8d5d-50b84294d5ad" path="/var/lib/kubelet/pods/faa5053c-9d62-4861-8d5d-50b84294d5ad/volumes" Mar 13 21:41:01 crc kubenswrapper[5029]: I0313 21:41:01.950898 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:41:01 crc kubenswrapper[5029]: I0313 21:41:01.951471 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:41:31 crc kubenswrapper[5029]: I0313 21:41:31.950344 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:41:31 crc kubenswrapper[5029]: I0313 21:41:31.951089 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:41:31 crc kubenswrapper[5029]: I0313 21:41:31.951141 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 21:41:31 crc 
kubenswrapper[5029]: I0313 21:41:31.952055 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"449c79024f27b1eca0c3dd6e13388b325187db7167c402eecbe9ac1b3ab04370"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 21:41:31 crc kubenswrapper[5029]: I0313 21:41:31.952110 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://449c79024f27b1eca0c3dd6e13388b325187db7167c402eecbe9ac1b3ab04370" gracePeriod=600 Mar 13 21:41:32 crc kubenswrapper[5029]: I0313 21:41:32.803656 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="449c79024f27b1eca0c3dd6e13388b325187db7167c402eecbe9ac1b3ab04370" exitCode=0 Mar 13 21:41:32 crc kubenswrapper[5029]: I0313 21:41:32.803731 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"449c79024f27b1eca0c3dd6e13388b325187db7167c402eecbe9ac1b3ab04370"} Mar 13 21:41:32 crc kubenswrapper[5029]: I0313 21:41:32.804202 5029 scope.go:117] "RemoveContainer" containerID="28f643dc0516dd30620cbc04c49f588796a49c34812b1afef9bfc78db903b068" Mar 13 21:41:33 crc kubenswrapper[5029]: I0313 21:41:33.819435 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"} Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.158712 5029 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557302-zqhf8"] Mar 13 21:42:00 crc kubenswrapper[5029]: E0313 21:42:00.159798 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerName="extract-utilities" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.159814 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerName="extract-utilities" Mar 13 21:42:00 crc kubenswrapper[5029]: E0313 21:42:00.159827 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerName="registry-server" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.159836 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerName="registry-server" Mar 13 21:42:00 crc kubenswrapper[5029]: E0313 21:42:00.159871 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerName="extract-content" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.159882 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerName="extract-content" Mar 13 21:42:00 crc kubenswrapper[5029]: E0313 21:42:00.159911 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerName="registry-server" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.159919 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerName="registry-server" Mar 13 21:42:00 crc kubenswrapper[5029]: E0313 21:42:00.159937 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerName="extract-content" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.159944 5029 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerName="extract-content" Mar 13 21:42:00 crc kubenswrapper[5029]: E0313 21:42:00.159957 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerName="extract-content" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.159964 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerName="extract-content" Mar 13 21:42:00 crc kubenswrapper[5029]: E0313 21:42:00.159983 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerName="extract-utilities" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.159989 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerName="extract-utilities" Mar 13 21:42:00 crc kubenswrapper[5029]: E0313 21:42:00.160010 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerName="extract-utilities" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.160018 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerName="extract-utilities" Mar 13 21:42:00 crc kubenswrapper[5029]: E0313 21:42:00.160030 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerName="registry-server" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.160037 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerName="registry-server" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.160250 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="db37dfcc-e42a-42fc-8734-4b9111b91b8b" containerName="registry-server" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.160272 5029 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="faa5053c-9d62-4861-8d5d-50b84294d5ad" containerName="registry-server" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.160294 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="b320d7a7-2bd6-49ce-9a92-3b742510ccf7" containerName="registry-server" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.161220 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557302-zqhf8" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.163798 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.164014 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.165747 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.173392 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557302-zqhf8"] Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.291531 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pfnz\" (UniqueName: \"kubernetes.io/projected/b5c282f5-bbd2-47b4-86ad-6a6b19de890b-kube-api-access-4pfnz\") pod \"auto-csr-approver-29557302-zqhf8\" (UID: \"b5c282f5-bbd2-47b4-86ad-6a6b19de890b\") " pod="openshift-infra/auto-csr-approver-29557302-zqhf8" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.394041 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pfnz\" (UniqueName: \"kubernetes.io/projected/b5c282f5-bbd2-47b4-86ad-6a6b19de890b-kube-api-access-4pfnz\") pod \"auto-csr-approver-29557302-zqhf8\" (UID: \"b5c282f5-bbd2-47b4-86ad-6a6b19de890b\") " 
pod="openshift-infra/auto-csr-approver-29557302-zqhf8" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.420267 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pfnz\" (UniqueName: \"kubernetes.io/projected/b5c282f5-bbd2-47b4-86ad-6a6b19de890b-kube-api-access-4pfnz\") pod \"auto-csr-approver-29557302-zqhf8\" (UID: \"b5c282f5-bbd2-47b4-86ad-6a6b19de890b\") " pod="openshift-infra/auto-csr-approver-29557302-zqhf8" Mar 13 21:42:00 crc kubenswrapper[5029]: I0313 21:42:00.488668 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557302-zqhf8" Mar 13 21:42:01 crc kubenswrapper[5029]: I0313 21:42:01.056052 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557302-zqhf8"] Mar 13 21:42:01 crc kubenswrapper[5029]: I0313 21:42:01.089062 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557302-zqhf8" event={"ID":"b5c282f5-bbd2-47b4-86ad-6a6b19de890b","Type":"ContainerStarted","Data":"83a12f3d86184673ac3602735ab84f2196966afcf6089be8d1a5391f43bc9d14"} Mar 13 21:42:03 crc kubenswrapper[5029]: I0313 21:42:03.110202 5029 generic.go:334] "Generic (PLEG): container finished" podID="b5c282f5-bbd2-47b4-86ad-6a6b19de890b" containerID="8d117b8feb68a44e5429586c6614ad55d381a01ed9e3e7167ad25da3366550ea" exitCode=0 Mar 13 21:42:03 crc kubenswrapper[5029]: I0313 21:42:03.110337 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557302-zqhf8" event={"ID":"b5c282f5-bbd2-47b4-86ad-6a6b19de890b","Type":"ContainerDied","Data":"8d117b8feb68a44e5429586c6614ad55d381a01ed9e3e7167ad25da3366550ea"} Mar 13 21:42:04 crc kubenswrapper[5029]: I0313 21:42:04.710196 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557302-zqhf8" Mar 13 21:42:04 crc kubenswrapper[5029]: I0313 21:42:04.896612 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pfnz\" (UniqueName: \"kubernetes.io/projected/b5c282f5-bbd2-47b4-86ad-6a6b19de890b-kube-api-access-4pfnz\") pod \"b5c282f5-bbd2-47b4-86ad-6a6b19de890b\" (UID: \"b5c282f5-bbd2-47b4-86ad-6a6b19de890b\") " Mar 13 21:42:04 crc kubenswrapper[5029]: I0313 21:42:04.912988 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c282f5-bbd2-47b4-86ad-6a6b19de890b-kube-api-access-4pfnz" (OuterVolumeSpecName: "kube-api-access-4pfnz") pod "b5c282f5-bbd2-47b4-86ad-6a6b19de890b" (UID: "b5c282f5-bbd2-47b4-86ad-6a6b19de890b"). InnerVolumeSpecName "kube-api-access-4pfnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:42:04 crc kubenswrapper[5029]: I0313 21:42:04.999565 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pfnz\" (UniqueName: \"kubernetes.io/projected/b5c282f5-bbd2-47b4-86ad-6a6b19de890b-kube-api-access-4pfnz\") on node \"crc\" DevicePath \"\"" Mar 13 21:42:05 crc kubenswrapper[5029]: I0313 21:42:05.134596 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557302-zqhf8" event={"ID":"b5c282f5-bbd2-47b4-86ad-6a6b19de890b","Type":"ContainerDied","Data":"83a12f3d86184673ac3602735ab84f2196966afcf6089be8d1a5391f43bc9d14"} Mar 13 21:42:05 crc kubenswrapper[5029]: I0313 21:42:05.134671 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83a12f3d86184673ac3602735ab84f2196966afcf6089be8d1a5391f43bc9d14" Mar 13 21:42:05 crc kubenswrapper[5029]: I0313 21:42:05.134692 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557302-zqhf8" Mar 13 21:42:05 crc kubenswrapper[5029]: I0313 21:42:05.797036 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557296-mjnv8"] Mar 13 21:42:05 crc kubenswrapper[5029]: I0313 21:42:05.809593 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557296-mjnv8"] Mar 13 21:42:06 crc kubenswrapper[5029]: I0313 21:42:06.613488 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49270907-f742-43aa-866e-a83f5eea76fd" path="/var/lib/kubelet/pods/49270907-f742-43aa-866e-a83f5eea76fd/volumes" Mar 13 21:42:15 crc kubenswrapper[5029]: I0313 21:42:15.173857 5029 scope.go:117] "RemoveContainer" containerID="11e1be8314f98cc31d2182e3907ea0d9828aa3f5568ea05ac2501e2273fe2c26" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.092765 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nwzzc"] Mar 13 21:43:49 crc kubenswrapper[5029]: E0313 21:43:49.095658 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c282f5-bbd2-47b4-86ad-6a6b19de890b" containerName="oc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.099078 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c282f5-bbd2-47b4-86ad-6a6b19de890b" containerName="oc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.099596 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c282f5-bbd2-47b4-86ad-6a6b19de890b" containerName="oc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.101367 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.119754 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwzzc"] Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.155309 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49gxt\" (UniqueName: \"kubernetes.io/projected/570a073a-cded-4053-9b43-5e0b43e657b2-kube-api-access-49gxt\") pod \"certified-operators-nwzzc\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.155449 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-utilities\") pod \"certified-operators-nwzzc\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.155537 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-catalog-content\") pod \"certified-operators-nwzzc\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.257834 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-catalog-content\") pod \"certified-operators-nwzzc\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.258013 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-49gxt\" (UniqueName: \"kubernetes.io/projected/570a073a-cded-4053-9b43-5e0b43e657b2-kube-api-access-49gxt\") pod \"certified-operators-nwzzc\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.258097 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-utilities\") pod \"certified-operators-nwzzc\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.258775 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-utilities\") pod \"certified-operators-nwzzc\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.259008 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-catalog-content\") pod \"certified-operators-nwzzc\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.306224 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49gxt\" (UniqueName: \"kubernetes.io/projected/570a073a-cded-4053-9b43-5e0b43e657b2-kube-api-access-49gxt\") pod \"certified-operators-nwzzc\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:49 crc kubenswrapper[5029]: I0313 21:43:49.423681 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:50 crc kubenswrapper[5029]: I0313 21:43:50.028315 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwzzc"] Mar 13 21:43:51 crc kubenswrapper[5029]: I0313 21:43:51.354969 5029 generic.go:334] "Generic (PLEG): container finished" podID="570a073a-cded-4053-9b43-5e0b43e657b2" containerID="35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217" exitCode=0 Mar 13 21:43:51 crc kubenswrapper[5029]: I0313 21:43:51.355089 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwzzc" event={"ID":"570a073a-cded-4053-9b43-5e0b43e657b2","Type":"ContainerDied","Data":"35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217"} Mar 13 21:43:51 crc kubenswrapper[5029]: I0313 21:43:51.355424 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwzzc" event={"ID":"570a073a-cded-4053-9b43-5e0b43e657b2","Type":"ContainerStarted","Data":"eeea0d58a73d6401d6bbfaaff025af28a7757c1492cf11da1e680f7a9be4a81a"} Mar 13 21:43:52 crc kubenswrapper[5029]: I0313 21:43:52.368250 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwzzc" event={"ID":"570a073a-cded-4053-9b43-5e0b43e657b2","Type":"ContainerStarted","Data":"6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b"} Mar 13 21:43:54 crc kubenswrapper[5029]: I0313 21:43:54.417609 5029 generic.go:334] "Generic (PLEG): container finished" podID="570a073a-cded-4053-9b43-5e0b43e657b2" containerID="6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b" exitCode=0 Mar 13 21:43:54 crc kubenswrapper[5029]: I0313 21:43:54.417713 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwzzc" 
event={"ID":"570a073a-cded-4053-9b43-5e0b43e657b2","Type":"ContainerDied","Data":"6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b"} Mar 13 21:43:55 crc kubenswrapper[5029]: I0313 21:43:55.434047 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwzzc" event={"ID":"570a073a-cded-4053-9b43-5e0b43e657b2","Type":"ContainerStarted","Data":"2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746"} Mar 13 21:43:55 crc kubenswrapper[5029]: I0313 21:43:55.472521 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nwzzc" podStartSLOduration=3.001925032 podStartE2EDuration="6.472488499s" podCreationTimestamp="2026-03-13 21:43:49 +0000 UTC" firstStartedPulling="2026-03-13 21:43:51.361247499 +0000 UTC m=+4591.377329922" lastFinishedPulling="2026-03-13 21:43:54.831810976 +0000 UTC m=+4594.847893389" observedRunningTime="2026-03-13 21:43:55.463043941 +0000 UTC m=+4595.479126354" watchObservedRunningTime="2026-03-13 21:43:55.472488499 +0000 UTC m=+4595.488570902" Mar 13 21:43:59 crc kubenswrapper[5029]: I0313 21:43:59.423921 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:59 crc kubenswrapper[5029]: I0313 21:43:59.424625 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:43:59 crc kubenswrapper[5029]: I0313 21:43:59.478034 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.155967 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557304-24tk6"] Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.158607 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557304-24tk6" Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.162931 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.162931 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.163509 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.168822 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557304-24tk6"] Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.278384 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwf8\" (UniqueName: \"kubernetes.io/projected/d52f542f-cf94-47a0-af9c-d7755bb6a3a8-kube-api-access-hvwf8\") pod \"auto-csr-approver-29557304-24tk6\" (UID: \"d52f542f-cf94-47a0-af9c-d7755bb6a3a8\") " pod="openshift-infra/auto-csr-approver-29557304-24tk6" Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.381586 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvwf8\" (UniqueName: \"kubernetes.io/projected/d52f542f-cf94-47a0-af9c-d7755bb6a3a8-kube-api-access-hvwf8\") pod \"auto-csr-approver-29557304-24tk6\" (UID: \"d52f542f-cf94-47a0-af9c-d7755bb6a3a8\") " pod="openshift-infra/auto-csr-approver-29557304-24tk6" Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.408629 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvwf8\" (UniqueName: \"kubernetes.io/projected/d52f542f-cf94-47a0-af9c-d7755bb6a3a8-kube-api-access-hvwf8\") pod \"auto-csr-approver-29557304-24tk6\" (UID: \"d52f542f-cf94-47a0-af9c-d7755bb6a3a8\") " 
pod="openshift-infra/auto-csr-approver-29557304-24tk6" Mar 13 21:44:00 crc kubenswrapper[5029]: I0313 21:44:00.483163 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557304-24tk6" Mar 13 21:44:01 crc kubenswrapper[5029]: I0313 21:44:01.078023 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557304-24tk6"] Mar 13 21:44:01 crc kubenswrapper[5029]: I0313 21:44:01.543400 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557304-24tk6" event={"ID":"d52f542f-cf94-47a0-af9c-d7755bb6a3a8","Type":"ContainerStarted","Data":"5545418544ea9eeac4914afe69842afdd5c30a735764d2557eab25ab2fbfe33d"} Mar 13 21:44:01 crc kubenswrapper[5029]: I0313 21:44:01.950272 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:44:01 crc kubenswrapper[5029]: I0313 21:44:01.950344 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:44:02 crc kubenswrapper[5029]: I0313 21:44:02.572122 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557304-24tk6" event={"ID":"d52f542f-cf94-47a0-af9c-d7755bb6a3a8","Type":"ContainerStarted","Data":"167749d69edaba744f1243043fd19aeff09bcd7a4a91a37d57536bb4ea53d9b0"} Mar 13 21:44:02 crc kubenswrapper[5029]: I0313 21:44:02.662321 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29557304-24tk6" podStartSLOduration=1.667952603 podStartE2EDuration="2.662289561s" podCreationTimestamp="2026-03-13 21:44:00 +0000 UTC" firstStartedPulling="2026-03-13 21:44:01.089378793 +0000 UTC m=+4601.105461196" lastFinishedPulling="2026-03-13 21:44:02.083715751 +0000 UTC m=+4602.099798154" observedRunningTime="2026-03-13 21:44:02.628840044 +0000 UTC m=+4602.644922447" watchObservedRunningTime="2026-03-13 21:44:02.662289561 +0000 UTC m=+4602.678371964" Mar 13 21:44:03 crc kubenswrapper[5029]: I0313 21:44:03.585345 5029 generic.go:334] "Generic (PLEG): container finished" podID="d52f542f-cf94-47a0-af9c-d7755bb6a3a8" containerID="167749d69edaba744f1243043fd19aeff09bcd7a4a91a37d57536bb4ea53d9b0" exitCode=0 Mar 13 21:44:03 crc kubenswrapper[5029]: I0313 21:44:03.585574 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557304-24tk6" event={"ID":"d52f542f-cf94-47a0-af9c-d7755bb6a3a8","Type":"ContainerDied","Data":"167749d69edaba744f1243043fd19aeff09bcd7a4a91a37d57536bb4ea53d9b0"} Mar 13 21:44:05 crc kubenswrapper[5029]: I0313 21:44:05.207870 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557304-24tk6" Mar 13 21:44:05 crc kubenswrapper[5029]: I0313 21:44:05.345006 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvwf8\" (UniqueName: \"kubernetes.io/projected/d52f542f-cf94-47a0-af9c-d7755bb6a3a8-kube-api-access-hvwf8\") pod \"d52f542f-cf94-47a0-af9c-d7755bb6a3a8\" (UID: \"d52f542f-cf94-47a0-af9c-d7755bb6a3a8\") " Mar 13 21:44:05 crc kubenswrapper[5029]: I0313 21:44:05.358400 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52f542f-cf94-47a0-af9c-d7755bb6a3a8-kube-api-access-hvwf8" (OuterVolumeSpecName: "kube-api-access-hvwf8") pod "d52f542f-cf94-47a0-af9c-d7755bb6a3a8" (UID: "d52f542f-cf94-47a0-af9c-d7755bb6a3a8"). InnerVolumeSpecName "kube-api-access-hvwf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:44:05 crc kubenswrapper[5029]: I0313 21:44:05.448838 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvwf8\" (UniqueName: \"kubernetes.io/projected/d52f542f-cf94-47a0-af9c-d7755bb6a3a8-kube-api-access-hvwf8\") on node \"crc\" DevicePath \"\"" Mar 13 21:44:05 crc kubenswrapper[5029]: I0313 21:44:05.614105 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557304-24tk6" event={"ID":"d52f542f-cf94-47a0-af9c-d7755bb6a3a8","Type":"ContainerDied","Data":"5545418544ea9eeac4914afe69842afdd5c30a735764d2557eab25ab2fbfe33d"} Mar 13 21:44:05 crc kubenswrapper[5029]: I0313 21:44:05.614150 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557304-24tk6" Mar 13 21:44:05 crc kubenswrapper[5029]: I0313 21:44:05.614166 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5545418544ea9eeac4914afe69842afdd5c30a735764d2557eab25ab2fbfe33d" Mar 13 21:44:06 crc kubenswrapper[5029]: I0313 21:44:06.301783 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557298-q7jm9"] Mar 13 21:44:06 crc kubenswrapper[5029]: I0313 21:44:06.313437 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557298-q7jm9"] Mar 13 21:44:06 crc kubenswrapper[5029]: I0313 21:44:06.621697 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff4b0cf-2750-4227-8d11-f80a1576568b" path="/var/lib/kubelet/pods/2ff4b0cf-2750-4227-8d11-f80a1576568b/volumes" Mar 13 21:44:09 crc kubenswrapper[5029]: I0313 21:44:09.475356 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:44:09 crc kubenswrapper[5029]: I0313 21:44:09.551272 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nwzzc"] Mar 13 21:44:09 crc kubenswrapper[5029]: I0313 21:44:09.696012 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nwzzc" podUID="570a073a-cded-4053-9b43-5e0b43e657b2" containerName="registry-server" containerID="cri-o://2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746" gracePeriod=2 Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.329189 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.503216 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49gxt\" (UniqueName: \"kubernetes.io/projected/570a073a-cded-4053-9b43-5e0b43e657b2-kube-api-access-49gxt\") pod \"570a073a-cded-4053-9b43-5e0b43e657b2\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.504219 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-catalog-content\") pod \"570a073a-cded-4053-9b43-5e0b43e657b2\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.504336 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-utilities\") pod \"570a073a-cded-4053-9b43-5e0b43e657b2\" (UID: \"570a073a-cded-4053-9b43-5e0b43e657b2\") " Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.506839 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-utilities" (OuterVolumeSpecName: "utilities") pod "570a073a-cded-4053-9b43-5e0b43e657b2" (UID: "570a073a-cded-4053-9b43-5e0b43e657b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.517131 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570a073a-cded-4053-9b43-5e0b43e657b2-kube-api-access-49gxt" (OuterVolumeSpecName: "kube-api-access-49gxt") pod "570a073a-cded-4053-9b43-5e0b43e657b2" (UID: "570a073a-cded-4053-9b43-5e0b43e657b2"). InnerVolumeSpecName "kube-api-access-49gxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.579654 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "570a073a-cded-4053-9b43-5e0b43e657b2" (UID: "570a073a-cded-4053-9b43-5e0b43e657b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.608087 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.608143 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/570a073a-cded-4053-9b43-5e0b43e657b2-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.608161 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49gxt\" (UniqueName: \"kubernetes.io/projected/570a073a-cded-4053-9b43-5e0b43e657b2-kube-api-access-49gxt\") on node \"crc\" DevicePath \"\"" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.713962 5029 generic.go:334] "Generic (PLEG): container finished" podID="570a073a-cded-4053-9b43-5e0b43e657b2" containerID="2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746" exitCode=0 Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.714029 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwzzc" event={"ID":"570a073a-cded-4053-9b43-5e0b43e657b2","Type":"ContainerDied","Data":"2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746"} Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.714062 5029 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nwzzc" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.714102 5029 scope.go:117] "RemoveContainer" containerID="2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.714084 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwzzc" event={"ID":"570a073a-cded-4053-9b43-5e0b43e657b2","Type":"ContainerDied","Data":"eeea0d58a73d6401d6bbfaaff025af28a7757c1492cf11da1e680f7a9be4a81a"} Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.749349 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nwzzc"] Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.755944 5029 scope.go:117] "RemoveContainer" containerID="6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.761854 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nwzzc"] Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.792313 5029 scope.go:117] "RemoveContainer" containerID="35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.842292 5029 scope.go:117] "RemoveContainer" containerID="2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746" Mar 13 21:44:10 crc kubenswrapper[5029]: E0313 21:44:10.843010 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746\": container with ID starting with 2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746 not found: ID does not exist" containerID="2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.843085 
5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746"} err="failed to get container status \"2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746\": rpc error: code = NotFound desc = could not find container \"2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746\": container with ID starting with 2249f15edf1e32f939c5186fa745bee54feccc7007f78654359dde26b8b58746 not found: ID does not exist" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.843124 5029 scope.go:117] "RemoveContainer" containerID="6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b" Mar 13 21:44:10 crc kubenswrapper[5029]: E0313 21:44:10.845736 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b\": container with ID starting with 6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b not found: ID does not exist" containerID="6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.845787 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b"} err="failed to get container status \"6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b\": rpc error: code = NotFound desc = could not find container \"6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b\": container with ID starting with 6c476dccbaa57f4ce0a8d7101dd57cd2765a8f2272cea77e1591b65704e0f36b not found: ID does not exist" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.845823 5029 scope.go:117] "RemoveContainer" containerID="35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217" Mar 13 21:44:10 crc kubenswrapper[5029]: E0313 
21:44:10.846314 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217\": container with ID starting with 35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217 not found: ID does not exist" containerID="35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217" Mar 13 21:44:10 crc kubenswrapper[5029]: I0313 21:44:10.846374 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217"} err="failed to get container status \"35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217\": rpc error: code = NotFound desc = could not find container \"35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217\": container with ID starting with 35dfbe03a5adc06f5aae823ab0c70793ec58808ac4a4043994ea7a67aa34d217 not found: ID does not exist" Mar 13 21:44:12 crc kubenswrapper[5029]: I0313 21:44:12.614014 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="570a073a-cded-4053-9b43-5e0b43e657b2" path="/var/lib/kubelet/pods/570a073a-cded-4053-9b43-5e0b43e657b2/volumes" Mar 13 21:44:15 crc kubenswrapper[5029]: I0313 21:44:15.290642 5029 scope.go:117] "RemoveContainer" containerID="69cd77a7565abaaf2e77c67407dc46aae581964de76d681401dcc9b49aefbb9c" Mar 13 21:44:20 crc kubenswrapper[5029]: I0313 21:44:20.774527 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="97961996-b234-441c-ba7c-2c479dfae7f4" containerName="galera" probeResult="failure" output="command timed out" Mar 13 21:44:20 crc kubenswrapper[5029]: I0313 21:44:20.778589 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="97961996-b234-441c-ba7c-2c479dfae7f4" containerName="galera" probeResult="failure" output="command timed out" 
Mar 13 21:44:31 crc kubenswrapper[5029]: I0313 21:44:31.953589 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:44:31 crc kubenswrapper[5029]: I0313 21:44:31.954330 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.161728 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"] Mar 13 21:45:00 crc kubenswrapper[5029]: E0313 21:45:00.162814 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52f542f-cf94-47a0-af9c-d7755bb6a3a8" containerName="oc" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.162831 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52f542f-cf94-47a0-af9c-d7755bb6a3a8" containerName="oc" Mar 13 21:45:00 crc kubenswrapper[5029]: E0313 21:45:00.162961 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570a073a-cded-4053-9b43-5e0b43e657b2" containerName="extract-utilities" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.162972 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="570a073a-cded-4053-9b43-5e0b43e657b2" containerName="extract-utilities" Mar 13 21:45:00 crc kubenswrapper[5029]: E0313 21:45:00.162994 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570a073a-cded-4053-9b43-5e0b43e657b2" containerName="registry-server" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.163001 5029 
state_mem.go:107] "Deleted CPUSet assignment" podUID="570a073a-cded-4053-9b43-5e0b43e657b2" containerName="registry-server" Mar 13 21:45:00 crc kubenswrapper[5029]: E0313 21:45:00.163009 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570a073a-cded-4053-9b43-5e0b43e657b2" containerName="extract-content" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.163017 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="570a073a-cded-4053-9b43-5e0b43e657b2" containerName="extract-content" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.163230 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52f542f-cf94-47a0-af9c-d7755bb6a3a8" containerName="oc" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.163245 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="570a073a-cded-4053-9b43-5e0b43e657b2" containerName="registry-server" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.163979 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.166279 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.166617 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.177560 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"] Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.237111 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-secret-volume\") pod \"collect-profiles-29557305-vfr89\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.237260 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8g2\" (UniqueName: \"kubernetes.io/projected/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-kube-api-access-mb8g2\") pod \"collect-profiles-29557305-vfr89\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89" Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.237297 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-config-volume\") pod \"collect-profiles-29557305-vfr89\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.339843 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb8g2\" (UniqueName: \"kubernetes.io/projected/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-kube-api-access-mb8g2\") pod \"collect-profiles-29557305-vfr89\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.339960 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-config-volume\") pod \"collect-profiles-29557305-vfr89\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.340053 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-secret-volume\") pod \"collect-profiles-29557305-vfr89\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.341134 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-config-volume\") pod \"collect-profiles-29557305-vfr89\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.349984 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-secret-volume\") pod \"collect-profiles-29557305-vfr89\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.358426 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb8g2\" (UniqueName: \"kubernetes.io/projected/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-kube-api-access-mb8g2\") pod \"collect-profiles-29557305-vfr89\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:00 crc kubenswrapper[5029]: I0313 21:45:00.504558 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:01 crc kubenswrapper[5029]: I0313 21:45:01.118778 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"]
Mar 13 21:45:01 crc kubenswrapper[5029]: I0313 21:45:01.276516 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89" event={"ID":"01a2e4fd-c32d-4d78-87bb-d4f8e830016a","Type":"ContainerStarted","Data":"63c070c9344882295644f7182efa08b5cb994f99ca702975f325d36248a6a444"}
Mar 13 21:45:01 crc kubenswrapper[5029]: I0313 21:45:01.950190 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:45:01 crc kubenswrapper[5029]: I0313 21:45:01.950816 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:45:01 crc kubenswrapper[5029]: I0313 21:45:01.950924 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2"
Mar 13 21:45:01 crc kubenswrapper[5029]: I0313 21:45:01.952215 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 21:45:01 crc kubenswrapper[5029]: I0313 21:45:01.952293 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53" gracePeriod=600
Mar 13 21:45:02 crc kubenswrapper[5029]: E0313 21:45:02.086255 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:45:02 crc kubenswrapper[5029]: I0313 21:45:02.289625 5029 generic.go:334] "Generic (PLEG): container finished" podID="01a2e4fd-c32d-4d78-87bb-d4f8e830016a" containerID="c8ba036ec1c0763c7a2ae4a7bf3de6b7ac694a74b50693f799372161d242e38c" exitCode=0
Mar 13 21:45:02 crc kubenswrapper[5029]: I0313 21:45:02.289732 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89" event={"ID":"01a2e4fd-c32d-4d78-87bb-d4f8e830016a","Type":"ContainerDied","Data":"c8ba036ec1c0763c7a2ae4a7bf3de6b7ac694a74b50693f799372161d242e38c"}
Mar 13 21:45:02 crc kubenswrapper[5029]: I0313 21:45:02.292546 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53" exitCode=0
Mar 13 21:45:02 crc kubenswrapper[5029]: I0313 21:45:02.292615 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"}
Mar 13 21:45:02 crc kubenswrapper[5029]: I0313 21:45:02.292656 5029 scope.go:117] "RemoveContainer" containerID="449c79024f27b1eca0c3dd6e13388b325187db7167c402eecbe9ac1b3ab04370"
Mar 13 21:45:02 crc kubenswrapper[5029]: I0313 21:45:02.293896 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:45:02 crc kubenswrapper[5029]: E0313 21:45:02.294398 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:45:03 crc kubenswrapper[5029]: I0313 21:45:03.809629 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:03 crc kubenswrapper[5029]: I0313 21:45:03.933079 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-config-volume\") pod \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") "
Mar 13 21:45:03 crc kubenswrapper[5029]: I0313 21:45:03.933292 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb8g2\" (UniqueName: \"kubernetes.io/projected/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-kube-api-access-mb8g2\") pod \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") "
Mar 13 21:45:03 crc kubenswrapper[5029]: I0313 21:45:03.933314 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-secret-volume\") pod \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\" (UID: \"01a2e4fd-c32d-4d78-87bb-d4f8e830016a\") "
Mar 13 21:45:03 crc kubenswrapper[5029]: I0313 21:45:03.934097 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-config-volume" (OuterVolumeSpecName: "config-volume") pod "01a2e4fd-c32d-4d78-87bb-d4f8e830016a" (UID: "01a2e4fd-c32d-4d78-87bb-d4f8e830016a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 21:45:03 crc kubenswrapper[5029]: I0313 21:45:03.940433 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "01a2e4fd-c32d-4d78-87bb-d4f8e830016a" (UID: "01a2e4fd-c32d-4d78-87bb-d4f8e830016a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:45:03 crc kubenswrapper[5029]: I0313 21:45:03.940723 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-kube-api-access-mb8g2" (OuterVolumeSpecName: "kube-api-access-mb8g2") pod "01a2e4fd-c32d-4d78-87bb-d4f8e830016a" (UID: "01a2e4fd-c32d-4d78-87bb-d4f8e830016a"). InnerVolumeSpecName "kube-api-access-mb8g2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:45:04 crc kubenswrapper[5029]: I0313 21:45:04.036242 5029 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-config-volume\") on node \"crc\" DevicePath \"\""
Mar 13 21:45:04 crc kubenswrapper[5029]: I0313 21:45:04.036282 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb8g2\" (UniqueName: \"kubernetes.io/projected/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-kube-api-access-mb8g2\") on node \"crc\" DevicePath \"\""
Mar 13 21:45:04 crc kubenswrapper[5029]: I0313 21:45:04.036293 5029 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a2e4fd-c32d-4d78-87bb-d4f8e830016a-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 13 21:45:04 crc kubenswrapper[5029]: I0313 21:45:04.320150 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89"
Mar 13 21:45:04 crc kubenswrapper[5029]: I0313 21:45:04.320158 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557305-vfr89" event={"ID":"01a2e4fd-c32d-4d78-87bb-d4f8e830016a","Type":"ContainerDied","Data":"63c070c9344882295644f7182efa08b5cb994f99ca702975f325d36248a6a444"}
Mar 13 21:45:04 crc kubenswrapper[5029]: I0313 21:45:04.320585 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63c070c9344882295644f7182efa08b5cb994f99ca702975f325d36248a6a444"
Mar 13 21:45:04 crc kubenswrapper[5029]: I0313 21:45:04.905484 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk"]
Mar 13 21:45:04 crc kubenswrapper[5029]: I0313 21:45:04.917894 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557260-8hvkk"]
Mar 13 21:45:06 crc kubenswrapper[5029]: I0313 21:45:06.613409 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51932f39-1baa-4d43-98ee-a58dccb6251b" path="/var/lib/kubelet/pods/51932f39-1baa-4d43-98ee-a58dccb6251b/volumes"
Mar 13 21:45:15 crc kubenswrapper[5029]: I0313 21:45:15.446917 5029 scope.go:117] "RemoveContainer" containerID="711e01804067f2047c1e8065aa0435b751ea6191ff7b9ebd089313de9b032b2e"
Mar 13 21:45:17 crc kubenswrapper[5029]: I0313 21:45:17.600350 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:45:17 crc kubenswrapper[5029]: E0313 21:45:17.601536 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:45:30 crc kubenswrapper[5029]: I0313 21:45:30.602321 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:45:30 crc kubenswrapper[5029]: E0313 21:45:30.603921 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:45:45 crc kubenswrapper[5029]: I0313 21:45:45.600415 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:45:45 crc kubenswrapper[5029]: E0313 21:45:45.601812 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.154272 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557306-xjsnh"]
Mar 13 21:46:00 crc kubenswrapper[5029]: E0313 21:46:00.156017 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a2e4fd-c32d-4d78-87bb-d4f8e830016a" containerName="collect-profiles"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.156036 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a2e4fd-c32d-4d78-87bb-d4f8e830016a" containerName="collect-profiles"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.156383 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a2e4fd-c32d-4d78-87bb-d4f8e830016a" containerName="collect-profiles"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.157489 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557306-xjsnh"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.160664 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.160689 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.163234 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.169024 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557306-xjsnh"]
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.213365 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lskj9\" (UniqueName: \"kubernetes.io/projected/1cf37533-cbe3-48d8-999b-26aca8696d76-kube-api-access-lskj9\") pod \"auto-csr-approver-29557306-xjsnh\" (UID: \"1cf37533-cbe3-48d8-999b-26aca8696d76\") " pod="openshift-infra/auto-csr-approver-29557306-xjsnh"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.317362 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lskj9\" (UniqueName: \"kubernetes.io/projected/1cf37533-cbe3-48d8-999b-26aca8696d76-kube-api-access-lskj9\") pod \"auto-csr-approver-29557306-xjsnh\" (UID: \"1cf37533-cbe3-48d8-999b-26aca8696d76\") " pod="openshift-infra/auto-csr-approver-29557306-xjsnh"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.344649 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lskj9\" (UniqueName: \"kubernetes.io/projected/1cf37533-cbe3-48d8-999b-26aca8696d76-kube-api-access-lskj9\") pod \"auto-csr-approver-29557306-xjsnh\" (UID: \"1cf37533-cbe3-48d8-999b-26aca8696d76\") " pod="openshift-infra/auto-csr-approver-29557306-xjsnh"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.484056 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557306-xjsnh"
Mar 13 21:46:00 crc kubenswrapper[5029]: I0313 21:46:00.606771 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:46:00 crc kubenswrapper[5029]: E0313 21:46:00.607501 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:46:01 crc kubenswrapper[5029]: I0313 21:46:01.056580 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 21:46:01 crc kubenswrapper[5029]: I0313 21:46:01.065712 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557306-xjsnh"]
Mar 13 21:46:01 crc kubenswrapper[5029]: I0313 21:46:01.941640 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557306-xjsnh" event={"ID":"1cf37533-cbe3-48d8-999b-26aca8696d76","Type":"ContainerStarted","Data":"f74a8b41978f4af0416413ccc02202f9d2dd86e657512289c581fe1d444dd4a8"}
Mar 13 21:46:02 crc kubenswrapper[5029]: I0313 21:46:02.953326 5029 generic.go:334] "Generic (PLEG): container finished" podID="1cf37533-cbe3-48d8-999b-26aca8696d76" containerID="3b6c8442a563f4f94aa95d83b2c2f2f061fe0fb7b60c8298180defdff87ed99f" exitCode=0
Mar 13 21:46:02 crc kubenswrapper[5029]: I0313 21:46:02.953429 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557306-xjsnh" event={"ID":"1cf37533-cbe3-48d8-999b-26aca8696d76","Type":"ContainerDied","Data":"3b6c8442a563f4f94aa95d83b2c2f2f061fe0fb7b60c8298180defdff87ed99f"}
Mar 13 21:46:04 crc kubenswrapper[5029]: I0313 21:46:04.399787 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557306-xjsnh"
Mar 13 21:46:04 crc kubenswrapper[5029]: I0313 21:46:04.523109 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lskj9\" (UniqueName: \"kubernetes.io/projected/1cf37533-cbe3-48d8-999b-26aca8696d76-kube-api-access-lskj9\") pod \"1cf37533-cbe3-48d8-999b-26aca8696d76\" (UID: \"1cf37533-cbe3-48d8-999b-26aca8696d76\") "
Mar 13 21:46:04 crc kubenswrapper[5029]: I0313 21:46:04.532407 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf37533-cbe3-48d8-999b-26aca8696d76-kube-api-access-lskj9" (OuterVolumeSpecName: "kube-api-access-lskj9") pod "1cf37533-cbe3-48d8-999b-26aca8696d76" (UID: "1cf37533-cbe3-48d8-999b-26aca8696d76"). InnerVolumeSpecName "kube-api-access-lskj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:46:04 crc kubenswrapper[5029]: I0313 21:46:04.626166 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lskj9\" (UniqueName: \"kubernetes.io/projected/1cf37533-cbe3-48d8-999b-26aca8696d76-kube-api-access-lskj9\") on node \"crc\" DevicePath \"\""
Mar 13 21:46:04 crc kubenswrapper[5029]: I0313 21:46:04.978894 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557306-xjsnh" event={"ID":"1cf37533-cbe3-48d8-999b-26aca8696d76","Type":"ContainerDied","Data":"f74a8b41978f4af0416413ccc02202f9d2dd86e657512289c581fe1d444dd4a8"}
Mar 13 21:46:04 crc kubenswrapper[5029]: I0313 21:46:04.979315 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74a8b41978f4af0416413ccc02202f9d2dd86e657512289c581fe1d444dd4a8"
Mar 13 21:46:04 crc kubenswrapper[5029]: I0313 21:46:04.979007 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557306-xjsnh"
Mar 13 21:46:05 crc kubenswrapper[5029]: I0313 21:46:05.485057 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557300-fcv76"]
Mar 13 21:46:05 crc kubenswrapper[5029]: I0313 21:46:05.494760 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557300-fcv76"]
Mar 13 21:46:06 crc kubenswrapper[5029]: I0313 21:46:06.617050 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="142238c8-8b74-4ec9-a06b-ac4acef624fe" path="/var/lib/kubelet/pods/142238c8-8b74-4ec9-a06b-ac4acef624fe/volumes"
Mar 13 21:46:15 crc kubenswrapper[5029]: I0313 21:46:15.542576 5029 scope.go:117] "RemoveContainer" containerID="6c8279a99fff2db79d3a69d6b7e6cac6d9e7628bbb3e7eb24cb0f12aa9f04161"
Mar 13 21:46:15 crc kubenswrapper[5029]: I0313 21:46:15.573457 5029 scope.go:117] "RemoveContainer" containerID="9f136431d0f0707d0967e8e0995e87ba05c78c5e23d1c13845129d6ba3ccdfd8"
Mar 13 21:46:15 crc kubenswrapper[5029]: I0313 21:46:15.600177 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:46:15 crc kubenswrapper[5029]: E0313 21:46:15.601311 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:46:15 crc kubenswrapper[5029]: I0313 21:46:15.648523 5029 scope.go:117] "RemoveContainer" containerID="9ea8fdcb12dea2ec5c0950f5b2eee5f006a52611aef205e8f6f66c1306682da5"
Mar 13 21:46:15 crc kubenswrapper[5029]: I0313 21:46:15.697182 5029 scope.go:117] "RemoveContainer" containerID="7498f15d3ef96c38dd2c5c1fbbfbbd54db9b9c88936ec6317023d791a6ce3379"
Mar 13 21:46:27 crc kubenswrapper[5029]: I0313 21:46:27.599817 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:46:27 crc kubenswrapper[5029]: E0313 21:46:27.600672 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:46:41 crc kubenswrapper[5029]: I0313 21:46:41.600048 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:46:41 crc kubenswrapper[5029]: E0313 21:46:41.601064 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:46:56 crc kubenswrapper[5029]: I0313 21:46:56.599967 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:46:56 crc kubenswrapper[5029]: E0313 21:46:56.601259 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:47:07 crc kubenswrapper[5029]: I0313 21:47:07.599642 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:47:07 crc kubenswrapper[5029]: E0313 21:47:07.601332 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:47:22 crc kubenswrapper[5029]: I0313 21:47:22.600310 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:47:22 crc kubenswrapper[5029]: E0313 21:47:22.601383 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:47:33 crc kubenswrapper[5029]: I0313 21:47:33.599520 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:47:33 crc kubenswrapper[5029]: E0313 21:47:33.600944 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:47:47 crc kubenswrapper[5029]: I0313 21:47:47.602747 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:47:47 crc kubenswrapper[5029]: E0313 21:47:47.603967 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.149649 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557308-g98jn"]
Mar 13 21:48:00 crc kubenswrapper[5029]: E0313 21:48:00.150952 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf37533-cbe3-48d8-999b-26aca8696d76" containerName="oc"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.150969 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf37533-cbe3-48d8-999b-26aca8696d76" containerName="oc"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.151239 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf37533-cbe3-48d8-999b-26aca8696d76" containerName="oc"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.152090 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557308-g98jn"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.155144 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.155381 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.157483 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.183419 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhc5\" (UniqueName: \"kubernetes.io/projected/574a9994-4a63-44ff-98d5-dc305aba3cbf-kube-api-access-9mhc5\") pod \"auto-csr-approver-29557308-g98jn\" (UID: \"574a9994-4a63-44ff-98d5-dc305aba3cbf\") " pod="openshift-infra/auto-csr-approver-29557308-g98jn"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.199736 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557308-g98jn"]
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.286017 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhc5\" (UniqueName: \"kubernetes.io/projected/574a9994-4a63-44ff-98d5-dc305aba3cbf-kube-api-access-9mhc5\") pod \"auto-csr-approver-29557308-g98jn\" (UID: \"574a9994-4a63-44ff-98d5-dc305aba3cbf\") " pod="openshift-infra/auto-csr-approver-29557308-g98jn"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.310204 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhc5\" (UniqueName: \"kubernetes.io/projected/574a9994-4a63-44ff-98d5-dc305aba3cbf-kube-api-access-9mhc5\") pod \"auto-csr-approver-29557308-g98jn\" (UID: \"574a9994-4a63-44ff-98d5-dc305aba3cbf\") " pod="openshift-infra/auto-csr-approver-29557308-g98jn"
Mar 13 21:48:00 crc kubenswrapper[5029]: I0313 21:48:00.474264 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557308-g98jn"
Mar 13 21:48:01 crc kubenswrapper[5029]: I0313 21:48:01.038668 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557308-g98jn"]
Mar 13 21:48:01 crc kubenswrapper[5029]: I0313 21:48:01.193517 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557308-g98jn" event={"ID":"574a9994-4a63-44ff-98d5-dc305aba3cbf","Type":"ContainerStarted","Data":"e7e9c3c0fe2fe6724cf016cc9dacbf4ad3e8a91356e43440f1f9daeaf578efd8"}
Mar 13 21:48:02 crc kubenswrapper[5029]: I0313 21:48:02.600280 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:48:02 crc kubenswrapper[5029]: E0313 21:48:02.601275 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:48:03 crc kubenswrapper[5029]: I0313 21:48:03.220698 5029 generic.go:334] "Generic (PLEG): container finished" podID="574a9994-4a63-44ff-98d5-dc305aba3cbf" containerID="def0ddbfca733486a764e957fd716b5d75c94b70ec041aa8f46f38b44123e949" exitCode=0
Mar 13 21:48:03 crc kubenswrapper[5029]: I0313 21:48:03.220788 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557308-g98jn" event={"ID":"574a9994-4a63-44ff-98d5-dc305aba3cbf","Type":"ContainerDied","Data":"def0ddbfca733486a764e957fd716b5d75c94b70ec041aa8f46f38b44123e949"}
Mar 13 21:48:04 crc kubenswrapper[5029]: I0313 21:48:04.640159 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557308-g98jn"
Mar 13 21:48:04 crc kubenswrapper[5029]: I0313 21:48:04.816642 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mhc5\" (UniqueName: \"kubernetes.io/projected/574a9994-4a63-44ff-98d5-dc305aba3cbf-kube-api-access-9mhc5\") pod \"574a9994-4a63-44ff-98d5-dc305aba3cbf\" (UID: \"574a9994-4a63-44ff-98d5-dc305aba3cbf\") "
Mar 13 21:48:04 crc kubenswrapper[5029]: I0313 21:48:04.823167 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574a9994-4a63-44ff-98d5-dc305aba3cbf-kube-api-access-9mhc5" (OuterVolumeSpecName: "kube-api-access-9mhc5") pod "574a9994-4a63-44ff-98d5-dc305aba3cbf" (UID: "574a9994-4a63-44ff-98d5-dc305aba3cbf"). InnerVolumeSpecName "kube-api-access-9mhc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:48:04 crc kubenswrapper[5029]: I0313 21:48:04.920694 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mhc5\" (UniqueName: \"kubernetes.io/projected/574a9994-4a63-44ff-98d5-dc305aba3cbf-kube-api-access-9mhc5\") on node \"crc\" DevicePath \"\""
Mar 13 21:48:05 crc kubenswrapper[5029]: I0313 21:48:05.244707 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557308-g98jn" event={"ID":"574a9994-4a63-44ff-98d5-dc305aba3cbf","Type":"ContainerDied","Data":"e7e9c3c0fe2fe6724cf016cc9dacbf4ad3e8a91356e43440f1f9daeaf578efd8"}
Mar 13 21:48:05 crc kubenswrapper[5029]: I0313 21:48:05.244762 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7e9c3c0fe2fe6724cf016cc9dacbf4ad3e8a91356e43440f1f9daeaf578efd8"
Mar 13 21:48:05 crc kubenswrapper[5029]: I0313 21:48:05.244785 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557308-g98jn"
Mar 13 21:48:05 crc kubenswrapper[5029]: I0313 21:48:05.715657 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557302-zqhf8"]
Mar 13 21:48:05 crc kubenswrapper[5029]: I0313 21:48:05.725914 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557302-zqhf8"]
Mar 13 21:48:06 crc kubenswrapper[5029]: I0313 21:48:06.615581 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c282f5-bbd2-47b4-86ad-6a6b19de890b" path="/var/lib/kubelet/pods/b5c282f5-bbd2-47b4-86ad-6a6b19de890b/volumes"
Mar 13 21:48:15 crc kubenswrapper[5029]: I0313 21:48:15.601033 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:48:15 crc kubenswrapper[5029]: E0313 21:48:15.602476 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:48:15 crc kubenswrapper[5029]: I0313 21:48:15.847311 5029 scope.go:117] "RemoveContainer" containerID="8d117b8feb68a44e5429586c6614ad55d381a01ed9e3e7167ad25da3366550ea"
Mar 13 21:48:30 crc kubenswrapper[5029]: I0313 21:48:30.608520 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:48:30 crc kubenswrapper[5029]: E0313 21:48:30.609900 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:48:41 crc kubenswrapper[5029]: I0313 21:48:41.599702 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:48:41 crc kubenswrapper[5029]: E0313 21:48:41.601261 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:48:52 crc kubenswrapper[5029]: I0313 21:48:52.600139 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:48:52 crc kubenswrapper[5029]: E0313 21:48:52.601232 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:49:05 crc kubenswrapper[5029]: I0313 21:49:05.603566 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:49:05 crc kubenswrapper[5029]: E0313 21:49:05.605101 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:49:20 crc kubenswrapper[5029]: I0313 21:49:20.608585 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:49:20 crc kubenswrapper[5029]: E0313 21:49:20.609609 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a"
Mar 13 21:49:33 crc kubenswrapper[5029]: I0313 21:49:33.601158 5029 scope.go:117] 
"RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53" Mar 13 21:49:33 crc kubenswrapper[5029]: E0313 21:49:33.602125 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:49:45 crc kubenswrapper[5029]: I0313 21:49:45.600487 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53" Mar 13 21:49:45 crc kubenswrapper[5029]: E0313 21:49:45.601690 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:49:58 crc kubenswrapper[5029]: I0313 21:49:58.599597 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53" Mar 13 21:49:58 crc kubenswrapper[5029]: E0313 21:49:58.600709 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.149535 
5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557310-l649b"] Mar 13 21:50:00 crc kubenswrapper[5029]: E0313 21:50:00.150499 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574a9994-4a63-44ff-98d5-dc305aba3cbf" containerName="oc" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.150517 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="574a9994-4a63-44ff-98d5-dc305aba3cbf" containerName="oc" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.150785 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="574a9994-4a63-44ff-98d5-dc305aba3cbf" containerName="oc" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.152135 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557310-l649b" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.154841 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.155076 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.155781 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.162662 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557310-l649b"] Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.309094 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxjs\" (UniqueName: \"kubernetes.io/projected/431c15e2-dfc7-4e43-bbfe-7d5f8cccb700-kube-api-access-xfxjs\") pod \"auto-csr-approver-29557310-l649b\" (UID: \"431c15e2-dfc7-4e43-bbfe-7d5f8cccb700\") " 
pod="openshift-infra/auto-csr-approver-29557310-l649b" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.411358 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfxjs\" (UniqueName: \"kubernetes.io/projected/431c15e2-dfc7-4e43-bbfe-7d5f8cccb700-kube-api-access-xfxjs\") pod \"auto-csr-approver-29557310-l649b\" (UID: \"431c15e2-dfc7-4e43-bbfe-7d5f8cccb700\") " pod="openshift-infra/auto-csr-approver-29557310-l649b" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.442601 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfxjs\" (UniqueName: \"kubernetes.io/projected/431c15e2-dfc7-4e43-bbfe-7d5f8cccb700-kube-api-access-xfxjs\") pod \"auto-csr-approver-29557310-l649b\" (UID: \"431c15e2-dfc7-4e43-bbfe-7d5f8cccb700\") " pod="openshift-infra/auto-csr-approver-29557310-l649b" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.479476 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557310-l649b" Mar 13 21:50:00 crc kubenswrapper[5029]: I0313 21:50:00.889093 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557310-l649b"] Mar 13 21:50:01 crc kubenswrapper[5029]: I0313 21:50:01.528088 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557310-l649b" event={"ID":"431c15e2-dfc7-4e43-bbfe-7d5f8cccb700","Type":"ContainerStarted","Data":"c631c32e2858ff842e2166047d2920810688c61c6df0bd0c6130e9aedeb45f3a"} Mar 13 21:50:02 crc kubenswrapper[5029]: I0313 21:50:02.542668 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557310-l649b" event={"ID":"431c15e2-dfc7-4e43-bbfe-7d5f8cccb700","Type":"ContainerStarted","Data":"f6f47774bb05f5cbbede1c4bbbbf28d873816c695238b5b0cc306e185c1e66ec"} Mar 13 21:50:02 crc kubenswrapper[5029]: I0313 21:50:02.563680 5029 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557310-l649b" podStartSLOduration=1.379128588 podStartE2EDuration="2.563660549s" podCreationTimestamp="2026-03-13 21:50:00 +0000 UTC" firstStartedPulling="2026-03-13 21:50:00.945676956 +0000 UTC m=+4960.961759359" lastFinishedPulling="2026-03-13 21:50:02.130208917 +0000 UTC m=+4962.146291320" observedRunningTime="2026-03-13 21:50:02.562237131 +0000 UTC m=+4962.578319544" watchObservedRunningTime="2026-03-13 21:50:02.563660549 +0000 UTC m=+4962.579742942" Mar 13 21:50:03 crc kubenswrapper[5029]: I0313 21:50:03.554396 5029 generic.go:334] "Generic (PLEG): container finished" podID="431c15e2-dfc7-4e43-bbfe-7d5f8cccb700" containerID="f6f47774bb05f5cbbede1c4bbbbf28d873816c695238b5b0cc306e185c1e66ec" exitCode=0 Mar 13 21:50:03 crc kubenswrapper[5029]: I0313 21:50:03.554594 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557310-l649b" event={"ID":"431c15e2-dfc7-4e43-bbfe-7d5f8cccb700","Type":"ContainerDied","Data":"f6f47774bb05f5cbbede1c4bbbbf28d873816c695238b5b0cc306e185c1e66ec"} Mar 13 21:50:05 crc kubenswrapper[5029]: I0313 21:50:05.028452 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557310-l649b" Mar 13 21:50:05 crc kubenswrapper[5029]: I0313 21:50:05.131057 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfxjs\" (UniqueName: \"kubernetes.io/projected/431c15e2-dfc7-4e43-bbfe-7d5f8cccb700-kube-api-access-xfxjs\") pod \"431c15e2-dfc7-4e43-bbfe-7d5f8cccb700\" (UID: \"431c15e2-dfc7-4e43-bbfe-7d5f8cccb700\") " Mar 13 21:50:05 crc kubenswrapper[5029]: I0313 21:50:05.140553 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431c15e2-dfc7-4e43-bbfe-7d5f8cccb700-kube-api-access-xfxjs" (OuterVolumeSpecName: "kube-api-access-xfxjs") pod "431c15e2-dfc7-4e43-bbfe-7d5f8cccb700" (UID: "431c15e2-dfc7-4e43-bbfe-7d5f8cccb700"). InnerVolumeSpecName "kube-api-access-xfxjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:50:05 crc kubenswrapper[5029]: I0313 21:50:05.234222 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfxjs\" (UniqueName: \"kubernetes.io/projected/431c15e2-dfc7-4e43-bbfe-7d5f8cccb700-kube-api-access-xfxjs\") on node \"crc\" DevicePath \"\"" Mar 13 21:50:05 crc kubenswrapper[5029]: I0313 21:50:05.579090 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557310-l649b" event={"ID":"431c15e2-dfc7-4e43-bbfe-7d5f8cccb700","Type":"ContainerDied","Data":"c631c32e2858ff842e2166047d2920810688c61c6df0bd0c6130e9aedeb45f3a"} Mar 13 21:50:05 crc kubenswrapper[5029]: I0313 21:50:05.579156 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c631c32e2858ff842e2166047d2920810688c61c6df0bd0c6130e9aedeb45f3a" Mar 13 21:50:05 crc kubenswrapper[5029]: I0313 21:50:05.579239 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557310-l649b" Mar 13 21:50:05 crc kubenswrapper[5029]: I0313 21:50:05.648578 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557304-24tk6"] Mar 13 21:50:05 crc kubenswrapper[5029]: I0313 21:50:05.658403 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557304-24tk6"] Mar 13 21:50:06 crc kubenswrapper[5029]: I0313 21:50:06.610764 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d52f542f-cf94-47a0-af9c-d7755bb6a3a8" path="/var/lib/kubelet/pods/d52f542f-cf94-47a0-af9c-d7755bb6a3a8/volumes" Mar 13 21:50:12 crc kubenswrapper[5029]: I0313 21:50:12.600338 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53" Mar 13 21:50:13 crc kubenswrapper[5029]: I0313 21:50:13.691963 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"2afe5730040ef35af35e0a35fbe930a07c61a8afc8eb6ad9a3d6ef4e635dfa99"} Mar 13 21:50:15 crc kubenswrapper[5029]: I0313 21:50:15.960867 5029 scope.go:117] "RemoveContainer" containerID="167749d69edaba744f1243043fd19aeff09bcd7a4a91a37d57536bb4ea53d9b0" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.035354 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bdv9v"] Mar 13 21:50:22 crc kubenswrapper[5029]: E0313 21:50:22.036720 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431c15e2-dfc7-4e43-bbfe-7d5f8cccb700" containerName="oc" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.036760 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="431c15e2-dfc7-4e43-bbfe-7d5f8cccb700" containerName="oc" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.037066 5029 
memory_manager.go:354] "RemoveStaleState removing state" podUID="431c15e2-dfc7-4e43-bbfe-7d5f8cccb700" containerName="oc" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.039088 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.061458 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdv9v"] Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.218648 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-catalog-content\") pod \"redhat-marketplace-bdv9v\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.218754 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-utilities\") pod \"redhat-marketplace-bdv9v\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.219503 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdrg\" (UniqueName: \"kubernetes.io/projected/b31a42c6-eae5-4223-84f2-6023bdace158-kube-api-access-9hdrg\") pod \"redhat-marketplace-bdv9v\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.321068 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hdrg\" (UniqueName: \"kubernetes.io/projected/b31a42c6-eae5-4223-84f2-6023bdace158-kube-api-access-9hdrg\") 
pod \"redhat-marketplace-bdv9v\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.321170 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-catalog-content\") pod \"redhat-marketplace-bdv9v\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.321201 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-utilities\") pod \"redhat-marketplace-bdv9v\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.322008 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-utilities\") pod \"redhat-marketplace-bdv9v\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.322015 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-catalog-content\") pod \"redhat-marketplace-bdv9v\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.349544 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdrg\" (UniqueName: \"kubernetes.io/projected/b31a42c6-eae5-4223-84f2-6023bdace158-kube-api-access-9hdrg\") pod \"redhat-marketplace-bdv9v\" (UID: 
\"b31a42c6-eae5-4223-84f2-6023bdace158\") " pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.365543 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:22 crc kubenswrapper[5029]: I0313 21:50:22.822873 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdv9v"] Mar 13 21:50:23 crc kubenswrapper[5029]: I0313 21:50:23.791300 5029 generic.go:334] "Generic (PLEG): container finished" podID="b31a42c6-eae5-4223-84f2-6023bdace158" containerID="d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea" exitCode=0 Mar 13 21:50:23 crc kubenswrapper[5029]: I0313 21:50:23.792158 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdv9v" event={"ID":"b31a42c6-eae5-4223-84f2-6023bdace158","Type":"ContainerDied","Data":"d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea"} Mar 13 21:50:23 crc kubenswrapper[5029]: I0313 21:50:23.792201 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdv9v" event={"ID":"b31a42c6-eae5-4223-84f2-6023bdace158","Type":"ContainerStarted","Data":"299534095d9aeba9a4219ba9bde75b330988436b75268119d69f02866f2497cf"} Mar 13 21:50:24 crc kubenswrapper[5029]: I0313 21:50:24.806254 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdv9v" event={"ID":"b31a42c6-eae5-4223-84f2-6023bdace158","Type":"ContainerStarted","Data":"bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f"} Mar 13 21:50:25 crc kubenswrapper[5029]: I0313 21:50:25.819352 5029 generic.go:334] "Generic (PLEG): container finished" podID="b31a42c6-eae5-4223-84f2-6023bdace158" containerID="bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f" exitCode=0 Mar 13 21:50:25 crc kubenswrapper[5029]: I0313 
21:50:25.819409 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdv9v" event={"ID":"b31a42c6-eae5-4223-84f2-6023bdace158","Type":"ContainerDied","Data":"bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f"} Mar 13 21:50:27 crc kubenswrapper[5029]: I0313 21:50:27.844615 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdv9v" event={"ID":"b31a42c6-eae5-4223-84f2-6023bdace158","Type":"ContainerStarted","Data":"3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1"} Mar 13 21:50:27 crc kubenswrapper[5029]: I0313 21:50:27.875254 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bdv9v" podStartSLOduration=2.842835192 podStartE2EDuration="5.875228317s" podCreationTimestamp="2026-03-13 21:50:22 +0000 UTC" firstStartedPulling="2026-03-13 21:50:23.797692981 +0000 UTC m=+4983.813775384" lastFinishedPulling="2026-03-13 21:50:26.830086106 +0000 UTC m=+4986.846168509" observedRunningTime="2026-03-13 21:50:27.8662254 +0000 UTC m=+4987.882307833" watchObservedRunningTime="2026-03-13 21:50:27.875228317 +0000 UTC m=+4987.891310720" Mar 13 21:50:32 crc kubenswrapper[5029]: I0313 21:50:32.366922 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:32 crc kubenswrapper[5029]: I0313 21:50:32.368018 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:32 crc kubenswrapper[5029]: I0313 21:50:32.422010 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:32 crc kubenswrapper[5029]: I0313 21:50:32.793640 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nkmj6"] Mar 13 21:50:32 crc 
kubenswrapper[5029]: I0313 21:50:32.796444 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:32 crc kubenswrapper[5029]: I0313 21:50:32.823816 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nkmj6"] Mar 13 21:50:32 crc kubenswrapper[5029]: I0313 21:50:32.950444 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:32 crc kubenswrapper[5029]: I0313 21:50:32.991695 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-utilities\") pod \"community-operators-nkmj6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:32 crc kubenswrapper[5029]: I0313 21:50:32.991834 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-catalog-content\") pod \"community-operators-nkmj6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:32 crc kubenswrapper[5029]: I0313 21:50:32.996052 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmv7p\" (UniqueName: \"kubernetes.io/projected/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-kube-api-access-zmv7p\") pod \"community-operators-nkmj6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:33 crc kubenswrapper[5029]: I0313 21:50:33.100819 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-catalog-content\") pod \"community-operators-nkmj6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:33 crc kubenswrapper[5029]: I0313 21:50:33.101019 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmv7p\" (UniqueName: \"kubernetes.io/projected/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-kube-api-access-zmv7p\") pod \"community-operators-nkmj6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:33 crc kubenswrapper[5029]: I0313 21:50:33.101077 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-utilities\") pod \"community-operators-nkmj6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:33 crc kubenswrapper[5029]: I0313 21:50:33.101986 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-utilities\") pod \"community-operators-nkmj6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:33 crc kubenswrapper[5029]: I0313 21:50:33.102217 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-catalog-content\") pod \"community-operators-nkmj6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:33 crc kubenswrapper[5029]: I0313 21:50:33.155661 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmv7p\" (UniqueName: 
\"kubernetes.io/projected/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-kube-api-access-zmv7p\") pod \"community-operators-nkmj6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:33 crc kubenswrapper[5029]: I0313 21:50:33.424235 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:34 crc kubenswrapper[5029]: I0313 21:50:34.220793 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nkmj6"] Mar 13 21:50:34 crc kubenswrapper[5029]: W0313 21:50:34.229437 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6cb27f4_0cdc_435f_bdf3_81fddf52cca6.slice/crio-5cc384e2ce4e597f34dbfd2ac79ee10fe739d173e66f945152f116fb6643e497 WatchSource:0}: Error finding container 5cc384e2ce4e597f34dbfd2ac79ee10fe739d173e66f945152f116fb6643e497: Status 404 returned error can't find the container with id 5cc384e2ce4e597f34dbfd2ac79ee10fe739d173e66f945152f116fb6643e497 Mar 13 21:50:34 crc kubenswrapper[5029]: I0313 21:50:34.917891 5029 generic.go:334] "Generic (PLEG): container finished" podID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerID="341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425" exitCode=0 Mar 13 21:50:34 crc kubenswrapper[5029]: I0313 21:50:34.917986 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkmj6" event={"ID":"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6","Type":"ContainerDied","Data":"341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425"} Mar 13 21:50:34 crc kubenswrapper[5029]: I0313 21:50:34.919018 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkmj6" 
event={"ID":"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6","Type":"ContainerStarted","Data":"5cc384e2ce4e597f34dbfd2ac79ee10fe739d173e66f945152f116fb6643e497"} Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.268194 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdv9v"] Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.268542 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bdv9v" podUID="b31a42c6-eae5-4223-84f2-6023bdace158" containerName="registry-server" containerID="cri-o://3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1" gracePeriod=2 Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.932118 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.933763 5029 generic.go:334] "Generic (PLEG): container finished" podID="b31a42c6-eae5-4223-84f2-6023bdace158" containerID="3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1" exitCode=0 Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.933891 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdv9v" event={"ID":"b31a42c6-eae5-4223-84f2-6023bdace158","Type":"ContainerDied","Data":"3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1"} Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.933954 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bdv9v" event={"ID":"b31a42c6-eae5-4223-84f2-6023bdace158","Type":"ContainerDied","Data":"299534095d9aeba9a4219ba9bde75b330988436b75268119d69f02866f2497cf"} Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.933983 5029 scope.go:117] "RemoveContainer" containerID="3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1" Mar 13 21:50:35 crc 
kubenswrapper[5029]: I0313 21:50:35.936885 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkmj6" event={"ID":"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6","Type":"ContainerStarted","Data":"89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb"} Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.959441 5029 scope.go:117] "RemoveContainer" containerID="bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f" Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.978907 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hdrg\" (UniqueName: \"kubernetes.io/projected/b31a42c6-eae5-4223-84f2-6023bdace158-kube-api-access-9hdrg\") pod \"b31a42c6-eae5-4223-84f2-6023bdace158\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.987530 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b31a42c6-eae5-4223-84f2-6023bdace158-kube-api-access-9hdrg" (OuterVolumeSpecName: "kube-api-access-9hdrg") pod "b31a42c6-eae5-4223-84f2-6023bdace158" (UID: "b31a42c6-eae5-4223-84f2-6023bdace158"). InnerVolumeSpecName "kube-api-access-9hdrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:50:35 crc kubenswrapper[5029]: I0313 21:50:35.992253 5029 scope.go:117] "RemoveContainer" containerID="d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.042522 5029 scope.go:117] "RemoveContainer" containerID="3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1" Mar 13 21:50:36 crc kubenswrapper[5029]: E0313 21:50:36.045648 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1\": container with ID starting with 3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1 not found: ID does not exist" containerID="3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.045703 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1"} err="failed to get container status \"3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1\": rpc error: code = NotFound desc = could not find container \"3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1\": container with ID starting with 3aa4068329b2e18001fc718aa73c9f4bb345b96c42a5d9e29562ff87b26308a1 not found: ID does not exist" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.045733 5029 scope.go:117] "RemoveContainer" containerID="bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f" Mar 13 21:50:36 crc kubenswrapper[5029]: E0313 21:50:36.046583 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f\": container with ID starting with 
bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f not found: ID does not exist" containerID="bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.046622 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f"} err="failed to get container status \"bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f\": rpc error: code = NotFound desc = could not find container \"bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f\": container with ID starting with bc0d817fae17d40074a354db704d9c668eecb7323e37b61612e80da478b37e5f not found: ID does not exist" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.046640 5029 scope.go:117] "RemoveContainer" containerID="d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea" Mar 13 21:50:36 crc kubenswrapper[5029]: E0313 21:50:36.047019 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea\": container with ID starting with d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea not found: ID does not exist" containerID="d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.047075 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea"} err="failed to get container status \"d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea\": rpc error: code = NotFound desc = could not find container \"d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea\": container with ID starting with d16dc4f0b668e6b9a9f32190212b03e69acd78df76ac46fcbafc84087839aaea not found: ID does not 
exist" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.081233 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-utilities\") pod \"b31a42c6-eae5-4223-84f2-6023bdace158\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.081411 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-catalog-content\") pod \"b31a42c6-eae5-4223-84f2-6023bdace158\" (UID: \"b31a42c6-eae5-4223-84f2-6023bdace158\") " Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.082190 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hdrg\" (UniqueName: \"kubernetes.io/projected/b31a42c6-eae5-4223-84f2-6023bdace158-kube-api-access-9hdrg\") on node \"crc\" DevicePath \"\"" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.083160 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-utilities" (OuterVolumeSpecName: "utilities") pod "b31a42c6-eae5-4223-84f2-6023bdace158" (UID: "b31a42c6-eae5-4223-84f2-6023bdace158"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.112674 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b31a42c6-eae5-4223-84f2-6023bdace158" (UID: "b31a42c6-eae5-4223-84f2-6023bdace158"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.184740 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.184820 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31a42c6-eae5-4223-84f2-6023bdace158-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.949985 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bdv9v" Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.980415 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdv9v"] Mar 13 21:50:36 crc kubenswrapper[5029]: I0313 21:50:36.993359 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bdv9v"] Mar 13 21:50:37 crc kubenswrapper[5029]: I0313 21:50:37.968037 5029 generic.go:334] "Generic (PLEG): container finished" podID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerID="89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb" exitCode=0 Mar 13 21:50:37 crc kubenswrapper[5029]: I0313 21:50:37.968099 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkmj6" event={"ID":"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6","Type":"ContainerDied","Data":"89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb"} Mar 13 21:50:38 crc kubenswrapper[5029]: I0313 21:50:38.612225 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b31a42c6-eae5-4223-84f2-6023bdace158" path="/var/lib/kubelet/pods/b31a42c6-eae5-4223-84f2-6023bdace158/volumes" Mar 13 21:50:38 crc kubenswrapper[5029]: I0313 
21:50:38.983092 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkmj6" event={"ID":"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6","Type":"ContainerStarted","Data":"fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc"} Mar 13 21:50:39 crc kubenswrapper[5029]: I0313 21:50:39.016044 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nkmj6" podStartSLOduration=3.53421491 podStartE2EDuration="7.016016395s" podCreationTimestamp="2026-03-13 21:50:32 +0000 UTC" firstStartedPulling="2026-03-13 21:50:34.920677912 +0000 UTC m=+4994.936760325" lastFinishedPulling="2026-03-13 21:50:38.402479407 +0000 UTC m=+4998.418561810" observedRunningTime="2026-03-13 21:50:39.005560649 +0000 UTC m=+4999.021643102" watchObservedRunningTime="2026-03-13 21:50:39.016016395 +0000 UTC m=+4999.032098808" Mar 13 21:50:43 crc kubenswrapper[5029]: I0313 21:50:43.424873 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:43 crc kubenswrapper[5029]: I0313 21:50:43.425715 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:43 crc kubenswrapper[5029]: I0313 21:50:43.476289 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:44 crc kubenswrapper[5029]: I0313 21:50:44.089321 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:44 crc kubenswrapper[5029]: I0313 21:50:44.153243 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nkmj6"] Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.055560 5029 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-nkmj6" podUID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerName="registry-server" containerID="cri-o://fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc" gracePeriod=2 Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.585686 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.682071 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-utilities\") pod \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.682258 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-catalog-content\") pod \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.682321 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmv7p\" (UniqueName: \"kubernetes.io/projected/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-kube-api-access-zmv7p\") pod \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\" (UID: \"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6\") " Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.683336 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-utilities" (OuterVolumeSpecName: "utilities") pod "a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" (UID: "a6cb27f4-0cdc-435f-bdf3-81fddf52cca6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.692734 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-kube-api-access-zmv7p" (OuterVolumeSpecName: "kube-api-access-zmv7p") pod "a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" (UID: "a6cb27f4-0cdc-435f-bdf3-81fddf52cca6"). InnerVolumeSpecName "kube-api-access-zmv7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.761946 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" (UID: "a6cb27f4-0cdc-435f-bdf3-81fddf52cca6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.785589 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.785997 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:50:46 crc kubenswrapper[5029]: I0313 21:50:46.786012 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmv7p\" (UniqueName: \"kubernetes.io/projected/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6-kube-api-access-zmv7p\") on node \"crc\" DevicePath \"\"" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.068215 5029 generic.go:334] "Generic (PLEG): container finished" podID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" 
containerID="fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc" exitCode=0 Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.068265 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkmj6" event={"ID":"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6","Type":"ContainerDied","Data":"fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc"} Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.068297 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkmj6" event={"ID":"a6cb27f4-0cdc-435f-bdf3-81fddf52cca6","Type":"ContainerDied","Data":"5cc384e2ce4e597f34dbfd2ac79ee10fe739d173e66f945152f116fb6643e497"} Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.068293 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nkmj6" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.068314 5029 scope.go:117] "RemoveContainer" containerID="fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.164221 5029 scope.go:117] "RemoveContainer" containerID="89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.174385 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nkmj6"] Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.187505 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nkmj6"] Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.204176 5029 scope.go:117] "RemoveContainer" containerID="341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.248286 5029 scope.go:117] "RemoveContainer" containerID="fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc" Mar 13 
21:50:47 crc kubenswrapper[5029]: E0313 21:50:47.249144 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc\": container with ID starting with fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc not found: ID does not exist" containerID="fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.249200 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc"} err="failed to get container status \"fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc\": rpc error: code = NotFound desc = could not find container \"fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc\": container with ID starting with fa605a76e54a5dd90c2d7c8c5370061e9aac97b61d6c34c5c8d35cedab1a26dc not found: ID does not exist" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.249241 5029 scope.go:117] "RemoveContainer" containerID="89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb" Mar 13 21:50:47 crc kubenswrapper[5029]: E0313 21:50:47.249731 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb\": container with ID starting with 89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb not found: ID does not exist" containerID="89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.249752 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb"} err="failed to get container status 
\"89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb\": rpc error: code = NotFound desc = could not find container \"89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb\": container with ID starting with 89dec82b086b973b04e82040b216a8a7ae7b8d2d8234d167accffaf54f2094bb not found: ID does not exist" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.249763 5029 scope.go:117] "RemoveContainer" containerID="341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425" Mar 13 21:50:47 crc kubenswrapper[5029]: E0313 21:50:47.250119 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425\": container with ID starting with 341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425 not found: ID does not exist" containerID="341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425" Mar 13 21:50:47 crc kubenswrapper[5029]: I0313 21:50:47.250174 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425"} err="failed to get container status \"341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425\": rpc error: code = NotFound desc = could not find container \"341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425\": container with ID starting with 341ef80f7faabaef1f38f161ce45af85ec7e94cfbea8eada5e8288d38af21425 not found: ID does not exist" Mar 13 21:50:48 crc kubenswrapper[5029]: I0313 21:50:48.612392 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" path="/var/lib/kubelet/pods/a6cb27f4-0cdc-435f-bdf3-81fddf52cca6/volumes" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.147206 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557312-4v7kt"] Mar 13 21:52:00 
crc kubenswrapper[5029]: E0313 21:52:00.148470 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31a42c6-eae5-4223-84f2-6023bdace158" containerName="registry-server" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.148488 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31a42c6-eae5-4223-84f2-6023bdace158" containerName="registry-server" Mar 13 21:52:00 crc kubenswrapper[5029]: E0313 21:52:00.148508 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerName="extract-utilities" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.148515 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerName="extract-utilities" Mar 13 21:52:00 crc kubenswrapper[5029]: E0313 21:52:00.148526 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerName="registry-server" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.148534 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerName="registry-server" Mar 13 21:52:00 crc kubenswrapper[5029]: E0313 21:52:00.148549 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerName="extract-content" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.148556 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerName="extract-content" Mar 13 21:52:00 crc kubenswrapper[5029]: E0313 21:52:00.148580 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31a42c6-eae5-4223-84f2-6023bdace158" containerName="extract-utilities" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.148587 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31a42c6-eae5-4223-84f2-6023bdace158" containerName="extract-utilities" Mar 13 21:52:00 crc 
kubenswrapper[5029]: E0313 21:52:00.148610 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31a42c6-eae5-4223-84f2-6023bdace158" containerName="extract-content" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.148619 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31a42c6-eae5-4223-84f2-6023bdace158" containerName="extract-content" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.148884 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cb27f4-0cdc-435f-bdf3-81fddf52cca6" containerName="registry-server" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.148909 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="b31a42c6-eae5-4223-84f2-6023bdace158" containerName="registry-server" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.149664 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557312-4v7kt" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.153089 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.153574 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.153714 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.164707 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557312-4v7kt"] Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.185843 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwgb\" (UniqueName: \"kubernetes.io/projected/15a27b5f-7a3b-4064-b605-26cb7b044d52-kube-api-access-fmwgb\") pod 
\"auto-csr-approver-29557312-4v7kt\" (UID: \"15a27b5f-7a3b-4064-b605-26cb7b044d52\") " pod="openshift-infra/auto-csr-approver-29557312-4v7kt" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.287691 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmwgb\" (UniqueName: \"kubernetes.io/projected/15a27b5f-7a3b-4064-b605-26cb7b044d52-kube-api-access-fmwgb\") pod \"auto-csr-approver-29557312-4v7kt\" (UID: \"15a27b5f-7a3b-4064-b605-26cb7b044d52\") " pod="openshift-infra/auto-csr-approver-29557312-4v7kt" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.318838 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwgb\" (UniqueName: \"kubernetes.io/projected/15a27b5f-7a3b-4064-b605-26cb7b044d52-kube-api-access-fmwgb\") pod \"auto-csr-approver-29557312-4v7kt\" (UID: \"15a27b5f-7a3b-4064-b605-26cb7b044d52\") " pod="openshift-infra/auto-csr-approver-29557312-4v7kt" Mar 13 21:52:00 crc kubenswrapper[5029]: I0313 21:52:00.474117 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557312-4v7kt" Mar 13 21:52:01 crc kubenswrapper[5029]: I0313 21:52:00.997825 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557312-4v7kt"] Mar 13 21:52:01 crc kubenswrapper[5029]: I0313 21:52:01.027609 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 21:52:01 crc kubenswrapper[5029]: I0313 21:52:01.963700 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557312-4v7kt" event={"ID":"15a27b5f-7a3b-4064-b605-26cb7b044d52","Type":"ContainerStarted","Data":"41c8d50644084f30f43418eed8de1d576e9d00b5cbb6f0ef1eb9d4aae66c5ce5"} Mar 13 21:52:02 crc kubenswrapper[5029]: I0313 21:52:02.977603 5029 generic.go:334] "Generic (PLEG): container finished" podID="15a27b5f-7a3b-4064-b605-26cb7b044d52" containerID="e4acac8f446d947d5b47c53e0fbbd23ebfd801328031b90f9ef2ef8149743e62" exitCode=0 Mar 13 21:52:02 crc kubenswrapper[5029]: I0313 21:52:02.977675 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557312-4v7kt" event={"ID":"15a27b5f-7a3b-4064-b605-26cb7b044d52","Type":"ContainerDied","Data":"e4acac8f446d947d5b47c53e0fbbd23ebfd801328031b90f9ef2ef8149743e62"} Mar 13 21:52:04 crc kubenswrapper[5029]: I0313 21:52:04.372157 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557312-4v7kt" Mar 13 21:52:04 crc kubenswrapper[5029]: I0313 21:52:04.395500 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmwgb\" (UniqueName: \"kubernetes.io/projected/15a27b5f-7a3b-4064-b605-26cb7b044d52-kube-api-access-fmwgb\") pod \"15a27b5f-7a3b-4064-b605-26cb7b044d52\" (UID: \"15a27b5f-7a3b-4064-b605-26cb7b044d52\") " Mar 13 21:52:04 crc kubenswrapper[5029]: I0313 21:52:04.404228 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a27b5f-7a3b-4064-b605-26cb7b044d52-kube-api-access-fmwgb" (OuterVolumeSpecName: "kube-api-access-fmwgb") pod "15a27b5f-7a3b-4064-b605-26cb7b044d52" (UID: "15a27b5f-7a3b-4064-b605-26cb7b044d52"). InnerVolumeSpecName "kube-api-access-fmwgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:52:04 crc kubenswrapper[5029]: I0313 21:52:04.498773 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmwgb\" (UniqueName: \"kubernetes.io/projected/15a27b5f-7a3b-4064-b605-26cb7b044d52-kube-api-access-fmwgb\") on node \"crc\" DevicePath \"\"" Mar 13 21:52:05 crc kubenswrapper[5029]: I0313 21:52:05.002676 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557312-4v7kt" event={"ID":"15a27b5f-7a3b-4064-b605-26cb7b044d52","Type":"ContainerDied","Data":"41c8d50644084f30f43418eed8de1d576e9d00b5cbb6f0ef1eb9d4aae66c5ce5"} Mar 13 21:52:05 crc kubenswrapper[5029]: I0313 21:52:05.002753 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41c8d50644084f30f43418eed8de1d576e9d00b5cbb6f0ef1eb9d4aae66c5ce5" Mar 13 21:52:05 crc kubenswrapper[5029]: I0313 21:52:05.002879 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557312-4v7kt" Mar 13 21:52:05 crc kubenswrapper[5029]: I0313 21:52:05.451710 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557306-xjsnh"] Mar 13 21:52:05 crc kubenswrapper[5029]: I0313 21:52:05.464526 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557306-xjsnh"] Mar 13 21:52:06 crc kubenswrapper[5029]: I0313 21:52:06.626337 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf37533-cbe3-48d8-999b-26aca8696d76" path="/var/lib/kubelet/pods/1cf37533-cbe3-48d8-999b-26aca8696d76/volumes" Mar 13 21:52:16 crc kubenswrapper[5029]: I0313 21:52:16.143444 5029 scope.go:117] "RemoveContainer" containerID="3b6c8442a563f4f94aa95d83b2c2f2f061fe0fb7b60c8298180defdff87ed99f" Mar 13 21:52:20 crc kubenswrapper[5029]: I0313 21:52:20.155006 5029 generic.go:334] "Generic (PLEG): container finished" podID="ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" containerID="f17cb623d390073b1880bcb1cd96b9be0c3a56713608483dd3e8b2dbe6c35ee4" exitCode=0 Mar 13 21:52:20 crc kubenswrapper[5029]: I0313 21:52:20.155136 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced","Type":"ContainerDied","Data":"f17cb623d390073b1880bcb1cd96b9be0c3a56713608483dd3e8b2dbe6c35ee4"} Mar 13 21:52:20 crc kubenswrapper[5029]: I0313 21:52:20.779896 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="97961996-b234-441c-ba7c-2c479dfae7f4" containerName="galera" probeResult="failure" output="command timed out" Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.580170 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.630554 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ca-certs\") pod \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.630644 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config\") pod \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") " Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.680543 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" (UID: "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.710792 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" (UID: "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.733436 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-temporary\") pod \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") "
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.733528 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjjn9\" (UniqueName: \"kubernetes.io/projected/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-kube-api-access-fjjn9\") pod \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") "
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.733950 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ssh-key\") pod \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") "
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.735216 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") "
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.735475 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" (UID: "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.735650 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-config-data\") pod \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") "
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.735691 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config-secret\") pod \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") "
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.735756 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-workdir\") pod \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\" (UID: \"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced\") "
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.736793 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-config-data" (OuterVolumeSpecName: "config-data") pod "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" (UID: "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.737831 5029 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ca-certs\") on node \"crc\" DevicePath \"\""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.737871 5029 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config\") on node \"crc\" DevicePath \"\""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.737885 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-config-data\") on node \"crc\" DevicePath \"\""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.737897 5029 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.740880 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" (UID: "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.741608 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-kube-api-access-fjjn9" (OuterVolumeSpecName: "kube-api-access-fjjn9") pod "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" (UID: "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced"). InnerVolumeSpecName "kube-api-access-fjjn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.748477 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" (UID: "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.767599 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" (UID: "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.769380 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" (UID: "ac9d86b5-6cef-43ea-90c2-3aebba7f6ced"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.841215 5029 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.841724 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjjn9\" (UniqueName: \"kubernetes.io/projected/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-kube-api-access-fjjn9\") on node \"crc\" DevicePath \"\""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.841738 5029 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-ssh-key\") on node \"crc\" DevicePath \"\""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.843215 5029 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.843246 5029 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac9d86b5-6cef-43ea-90c2-3aebba7f6ced-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.865183 5029 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Mar 13 21:52:21 crc kubenswrapper[5029]: I0313 21:52:21.945502 5029 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Mar 13 21:52:22 crc kubenswrapper[5029]: I0313 21:52:22.176529 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ac9d86b5-6cef-43ea-90c2-3aebba7f6ced","Type":"ContainerDied","Data":"c38a8855e2ce68d0a08fd0f21360f4c39642cb53570353f053dc459829769486"}
Mar 13 21:52:22 crc kubenswrapper[5029]: I0313 21:52:22.176587 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38a8855e2ce68d0a08fd0f21360f4c39642cb53570353f053dc459829769486"
Mar 13 21:52:22 crc kubenswrapper[5029]: I0313 21:52:22.176626 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Mar 13 21:52:31 crc kubenswrapper[5029]: I0313 21:52:31.949786 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:52:31 crc kubenswrapper[5029]: I0313 21:52:31.950595 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.860653 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9kdhx/must-gather-v6g8n"]
Mar 13 21:52:52 crc kubenswrapper[5029]: E0313 21:52:52.861980 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" containerName="tempest-tests-tempest-tests-runner"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.861996 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" containerName="tempest-tests-tempest-tests-runner"
Mar 13 21:52:52 crc kubenswrapper[5029]: E0313 21:52:52.862006 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a27b5f-7a3b-4064-b605-26cb7b044d52" containerName="oc"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.862012 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a27b5f-7a3b-4064-b605-26cb7b044d52" containerName="oc"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.862195 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac9d86b5-6cef-43ea-90c2-3aebba7f6ced" containerName="tempest-tests-tempest-tests-runner"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.862229 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a27b5f-7a3b-4064-b605-26cb7b044d52" containerName="oc"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.863557 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/must-gather-v6g8n"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.866809 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9kdhx"/"kube-root-ca.crt"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.873549 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9kdhx/must-gather-v6g8n"]
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.875163 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9kdhx"/"openshift-service-ca.crt"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.875167 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9kdhx"/"default-dockercfg-rsq7z"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.967345 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/69272fdf-af43-4ca2-8597-3f4d2fc412da-kube-api-access-z76bt\") pod \"must-gather-v6g8n\" (UID: \"69272fdf-af43-4ca2-8597-3f4d2fc412da\") " pod="openshift-must-gather-9kdhx/must-gather-v6g8n"
Mar 13 21:52:52 crc kubenswrapper[5029]: I0313 21:52:52.967562 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69272fdf-af43-4ca2-8597-3f4d2fc412da-must-gather-output\") pod \"must-gather-v6g8n\" (UID: \"69272fdf-af43-4ca2-8597-3f4d2fc412da\") " pod="openshift-must-gather-9kdhx/must-gather-v6g8n"
Mar 13 21:52:53 crc kubenswrapper[5029]: I0313 21:52:53.070292 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/69272fdf-af43-4ca2-8597-3f4d2fc412da-kube-api-access-z76bt\") pod \"must-gather-v6g8n\" (UID: \"69272fdf-af43-4ca2-8597-3f4d2fc412da\") " pod="openshift-must-gather-9kdhx/must-gather-v6g8n"
Mar 13 21:52:53 crc kubenswrapper[5029]: I0313 21:52:53.071417 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69272fdf-af43-4ca2-8597-3f4d2fc412da-must-gather-output\") pod \"must-gather-v6g8n\" (UID: \"69272fdf-af43-4ca2-8597-3f4d2fc412da\") " pod="openshift-must-gather-9kdhx/must-gather-v6g8n"
Mar 13 21:52:53 crc kubenswrapper[5029]: I0313 21:52:53.072284 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69272fdf-af43-4ca2-8597-3f4d2fc412da-must-gather-output\") pod \"must-gather-v6g8n\" (UID: \"69272fdf-af43-4ca2-8597-3f4d2fc412da\") " pod="openshift-must-gather-9kdhx/must-gather-v6g8n"
Mar 13 21:52:53 crc kubenswrapper[5029]: I0313 21:52:53.101825 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/69272fdf-af43-4ca2-8597-3f4d2fc412da-kube-api-access-z76bt\") pod \"must-gather-v6g8n\" (UID: \"69272fdf-af43-4ca2-8597-3f4d2fc412da\") " pod="openshift-must-gather-9kdhx/must-gather-v6g8n"
Mar 13 21:52:53 crc kubenswrapper[5029]: I0313 21:52:53.185759 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/must-gather-v6g8n"
Mar 13 21:52:53 crc kubenswrapper[5029]: I0313 21:52:53.932342 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9kdhx/must-gather-v6g8n"]
Mar 13 21:52:53 crc kubenswrapper[5029]: W0313 21:52:53.941057 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69272fdf_af43_4ca2_8597_3f4d2fc412da.slice/crio-1fe8dc3d3d3ddf30389ae2377ab4c05042b9fe2a08799484814ff43843fa899c WatchSource:0}: Error finding container 1fe8dc3d3d3ddf30389ae2377ab4c05042b9fe2a08799484814ff43843fa899c: Status 404 returned error can't find the container with id 1fe8dc3d3d3ddf30389ae2377ab4c05042b9fe2a08799484814ff43843fa899c
Mar 13 21:52:54 crc kubenswrapper[5029]: I0313 21:52:54.555218 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/must-gather-v6g8n" event={"ID":"69272fdf-af43-4ca2-8597-3f4d2fc412da","Type":"ContainerStarted","Data":"1fe8dc3d3d3ddf30389ae2377ab4c05042b9fe2a08799484814ff43843fa899c"}
Mar 13 21:53:01 crc kubenswrapper[5029]: I0313 21:53:01.950447 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:53:01 crc kubenswrapper[5029]: I0313 21:53:01.951247 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:53:02 crc kubenswrapper[5029]: I0313 21:53:02.647556 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/must-gather-v6g8n" event={"ID":"69272fdf-af43-4ca2-8597-3f4d2fc412da","Type":"ContainerStarted","Data":"a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f"}
Mar 13 21:53:02 crc kubenswrapper[5029]: I0313 21:53:02.647993 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/must-gather-v6g8n" event={"ID":"69272fdf-af43-4ca2-8597-3f4d2fc412da","Type":"ContainerStarted","Data":"ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df"}
Mar 13 21:53:02 crc kubenswrapper[5029]: I0313 21:53:02.679567 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9kdhx/must-gather-v6g8n" podStartSLOduration=3.72433615 podStartE2EDuration="10.679527393s" podCreationTimestamp="2026-03-13 21:52:52 +0000 UTC" firstStartedPulling="2026-03-13 21:52:53.943889381 +0000 UTC m=+5133.959971774" lastFinishedPulling="2026-03-13 21:53:00.899080604 +0000 UTC m=+5140.915163017" observedRunningTime="2026-03-13 21:53:02.669813945 +0000 UTC m=+5142.685896428" watchObservedRunningTime="2026-03-13 21:53:02.679527393 +0000 UTC m=+5142.695609866"
Mar 13 21:53:07 crc kubenswrapper[5029]: E0313 21:53:07.524893 5029 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.181:39812->38.102.83.181:36147: read tcp 38.102.83.181:39812->38.102.83.181:36147: read: connection reset by peer
Mar 13 21:53:08 crc kubenswrapper[5029]: I0313 21:53:08.665644 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9kdhx/crc-debug-g49x5"]
Mar 13 21:53:08 crc kubenswrapper[5029]: I0313 21:53:08.671236 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-g49x5"
Mar 13 21:53:08 crc kubenswrapper[5029]: I0313 21:53:08.793904 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a4b6e7f-443a-4b18-a1b4-84269b03935a-host\") pod \"crc-debug-g49x5\" (UID: \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\") " pod="openshift-must-gather-9kdhx/crc-debug-g49x5"
Mar 13 21:53:08 crc kubenswrapper[5029]: I0313 21:53:08.793984 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqxst\" (UniqueName: \"kubernetes.io/projected/9a4b6e7f-443a-4b18-a1b4-84269b03935a-kube-api-access-rqxst\") pod \"crc-debug-g49x5\" (UID: \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\") " pod="openshift-must-gather-9kdhx/crc-debug-g49x5"
Mar 13 21:53:08 crc kubenswrapper[5029]: I0313 21:53:08.897060 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a4b6e7f-443a-4b18-a1b4-84269b03935a-host\") pod \"crc-debug-g49x5\" (UID: \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\") " pod="openshift-must-gather-9kdhx/crc-debug-g49x5"
Mar 13 21:53:08 crc kubenswrapper[5029]: I0313 21:53:08.897284 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqxst\" (UniqueName: \"kubernetes.io/projected/9a4b6e7f-443a-4b18-a1b4-84269b03935a-kube-api-access-rqxst\") pod \"crc-debug-g49x5\" (UID: \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\") " pod="openshift-must-gather-9kdhx/crc-debug-g49x5"
Mar 13 21:53:08 crc kubenswrapper[5029]: I0313 21:53:08.897289 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a4b6e7f-443a-4b18-a1b4-84269b03935a-host\") pod \"crc-debug-g49x5\" (UID: \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\") " pod="openshift-must-gather-9kdhx/crc-debug-g49x5"
Mar 13 21:53:08 crc kubenswrapper[5029]: I0313 21:53:08.927424 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqxst\" (UniqueName: \"kubernetes.io/projected/9a4b6e7f-443a-4b18-a1b4-84269b03935a-kube-api-access-rqxst\") pod \"crc-debug-g49x5\" (UID: \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\") " pod="openshift-must-gather-9kdhx/crc-debug-g49x5"
Mar 13 21:53:08 crc kubenswrapper[5029]: I0313 21:53:08.995632 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-g49x5"
Mar 13 21:53:09 crc kubenswrapper[5029]: W0313 21:53:09.069500 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a4b6e7f_443a_4b18_a1b4_84269b03935a.slice/crio-0cb3e39ccb7decab44c355e4355d99a5591a63e142588b0a28c9c67207f868a4 WatchSource:0}: Error finding container 0cb3e39ccb7decab44c355e4355d99a5591a63e142588b0a28c9c67207f868a4: Status 404 returned error can't find the container with id 0cb3e39ccb7decab44c355e4355d99a5591a63e142588b0a28c9c67207f868a4
Mar 13 21:53:09 crc kubenswrapper[5029]: I0313 21:53:09.719273 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/crc-debug-g49x5" event={"ID":"9a4b6e7f-443a-4b18-a1b4-84269b03935a","Type":"ContainerStarted","Data":"0cb3e39ccb7decab44c355e4355d99a5591a63e142588b0a28c9c67207f868a4"}
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.119778 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cpcs9"]
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.125676 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.133116 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cpcs9"]
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.236875 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-catalog-content\") pod \"redhat-operators-cpcs9\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") " pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.237009 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-utilities\") pod \"redhat-operators-cpcs9\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") " pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.237047 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2wg\" (UniqueName: \"kubernetes.io/projected/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-kube-api-access-cb2wg\") pod \"redhat-operators-cpcs9\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") " pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.339030 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-utilities\") pod \"redhat-operators-cpcs9\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") " pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.339102 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2wg\" (UniqueName: \"kubernetes.io/projected/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-kube-api-access-cb2wg\") pod \"redhat-operators-cpcs9\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") " pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.339224 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-catalog-content\") pod \"redhat-operators-cpcs9\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") " pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.339570 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-utilities\") pod \"redhat-operators-cpcs9\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") " pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.339803 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-catalog-content\") pod \"redhat-operators-cpcs9\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") " pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.366793 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2wg\" (UniqueName: \"kubernetes.io/projected/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-kube-api-access-cb2wg\") pod \"redhat-operators-cpcs9\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") " pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:14 crc kubenswrapper[5029]: I0313 21:53:14.460545 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:23 crc kubenswrapper[5029]: E0313 21:53:23.946695 5029 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296"
Mar 13 21:53:23 crc kubenswrapper[5029]: E0313 21:53:23.948667 5029 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqxst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-g49x5_openshift-must-gather-9kdhx(9a4b6e7f-443a-4b18-a1b4-84269b03935a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 21:53:23 crc kubenswrapper[5029]: E0313 21:53:23.950230 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-9kdhx/crc-debug-g49x5" podUID="9a4b6e7f-443a-4b18-a1b4-84269b03935a"
Mar 13 21:53:24 crc kubenswrapper[5029]: I0313 21:53:24.139393 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cpcs9"]
Mar 13 21:53:24 crc kubenswrapper[5029]: I0313 21:53:24.898931 5029 generic.go:334] "Generic (PLEG): container finished" podID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerID="8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127" exitCode=0
Mar 13 21:53:24 crc kubenswrapper[5029]: I0313 21:53:24.899030 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cpcs9" event={"ID":"9f2c9c53-e241-44c8-93c0-d2e53d77bf26","Type":"ContainerDied","Data":"8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127"}
Mar 13 21:53:24 crc kubenswrapper[5029]: I0313 21:53:24.899466 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cpcs9" event={"ID":"9f2c9c53-e241-44c8-93c0-d2e53d77bf26","Type":"ContainerStarted","Data":"7c76304222b24120dc048625644133f43e6c46c4c53d7cd2d51617c654015024"}
Mar 13 21:53:24 crc kubenswrapper[5029]: E0313 21:53:24.902993 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-9kdhx/crc-debug-g49x5" podUID="9a4b6e7f-443a-4b18-a1b4-84269b03935a"
Mar 13 21:53:25 crc kubenswrapper[5029]: I0313 21:53:25.913214 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cpcs9" event={"ID":"9f2c9c53-e241-44c8-93c0-d2e53d77bf26","Type":"ContainerStarted","Data":"de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8"}
Mar 13 21:53:26 crc kubenswrapper[5029]: I0313 21:53:26.935415 5029 generic.go:334] "Generic (PLEG): container finished" podID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerID="de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8" exitCode=0
Mar 13 21:53:26 crc kubenswrapper[5029]: I0313 21:53:26.935490 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cpcs9" event={"ID":"9f2c9c53-e241-44c8-93c0-d2e53d77bf26","Type":"ContainerDied","Data":"de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8"}
Mar 13 21:53:28 crc kubenswrapper[5029]: I0313 21:53:28.966411 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cpcs9" event={"ID":"9f2c9c53-e241-44c8-93c0-d2e53d77bf26","Type":"ContainerStarted","Data":"696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42"}
Mar 13 21:53:29 crc kubenswrapper[5029]: I0313 21:53:29.008162 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cpcs9" podStartSLOduration=12.569598667 podStartE2EDuration="15.008122284s" podCreationTimestamp="2026-03-13 21:53:14 +0000 UTC" firstStartedPulling="2026-03-13 21:53:24.903808516 +0000 UTC m=+5164.919890929" lastFinishedPulling="2026-03-13 21:53:27.342332143 +0000 UTC m=+5167.358414546" observedRunningTime="2026-03-13 21:53:28.993912291 +0000 UTC m=+5169.009994714" watchObservedRunningTime="2026-03-13 21:53:29.008122284 +0000 UTC m=+5169.024204677"
Mar 13 21:53:31 crc kubenswrapper[5029]: I0313 21:53:31.950470 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 21:53:31 crc kubenswrapper[5029]: I0313 21:53:31.951230 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 21:53:31 crc kubenswrapper[5029]: I0313 21:53:31.951287 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2"
Mar 13 21:53:31 crc kubenswrapper[5029]: I0313 21:53:31.952241 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2afe5730040ef35af35e0a35fbe930a07c61a8afc8eb6ad9a3d6ef4e635dfa99"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 21:53:31 crc kubenswrapper[5029]: I0313 21:53:31.952299 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://2afe5730040ef35af35e0a35fbe930a07c61a8afc8eb6ad9a3d6ef4e635dfa99" gracePeriod=600
Mar 13 21:53:34 crc kubenswrapper[5029]: I0313 21:53:34.026390 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="2afe5730040ef35af35e0a35fbe930a07c61a8afc8eb6ad9a3d6ef4e635dfa99" exitCode=0
Mar 13 21:53:34 crc kubenswrapper[5029]: I0313 21:53:34.026486 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"2afe5730040ef35af35e0a35fbe930a07c61a8afc8eb6ad9a3d6ef4e635dfa99"}
Mar 13 21:53:34 crc kubenswrapper[5029]: I0313 21:53:34.027249 5029 scope.go:117] "RemoveContainer" containerID="d2f985e1f24b7b08e35da717dbbc482fa45a5d71053b788b444c48863ba86d53"
Mar 13 21:53:34 crc kubenswrapper[5029]: I0313 21:53:34.460901 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:34 crc kubenswrapper[5029]: I0313 21:53:34.460963 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:35 crc kubenswrapper[5029]: I0313 21:53:35.513501 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cpcs9" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerName="registry-server" probeResult="failure" output=<
Mar 13 21:53:35 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s
Mar 13 21:53:35 crc kubenswrapper[5029]: >
Mar 13 21:53:36 crc kubenswrapper[5029]: I0313 21:53:36.136891 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057"}
Mar 13 21:53:41 crc kubenswrapper[5029]: I0313 21:53:41.193963 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/crc-debug-g49x5" event={"ID":"9a4b6e7f-443a-4b18-a1b4-84269b03935a","Type":"ContainerStarted","Data":"0825a54fceb1381a2822ccf6bcbc2466e86f3d832fa59d7877694185da8e1da5"}
Mar 13 21:53:41 crc kubenswrapper[5029]: I0313 21:53:41.219488 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9kdhx/crc-debug-g49x5" podStartSLOduration=2.259259678 podStartE2EDuration="33.219461351s" podCreationTimestamp="2026-03-13 21:53:08 +0000 UTC" firstStartedPulling="2026-03-13 21:53:09.073642469 +0000 UTC m=+5149.089724872" lastFinishedPulling="2026-03-13 21:53:40.033844142 +0000 UTC m=+5180.049926545" observedRunningTime="2026-03-13 21:53:41.21220759 +0000 UTC m=+5181.228289983" watchObservedRunningTime="2026-03-13 21:53:41.219461351 +0000 UTC m=+5181.235543764"
Mar 13 21:53:44 crc kubenswrapper[5029]: I0313 21:53:44.531986 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:44 crc kubenswrapper[5029]: I0313 21:53:44.617155 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:45 crc kubenswrapper[5029]: I0313 21:53:45.304451 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cpcs9"]
Mar 13 21:53:46 crc kubenswrapper[5029]: I0313 21:53:46.238887 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cpcs9" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerName="registry-server" containerID="cri-o://696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42" gracePeriod=2
Mar 13 21:53:46 crc kubenswrapper[5029]: I0313 21:53:46.849316 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cpcs9"
Mar 13 21:53:46 crc kubenswrapper[5029]: I0313 21:53:46.926105 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb2wg\" (UniqueName: \"kubernetes.io/projected/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-kube-api-access-cb2wg\") pod \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") "
Mar 13 21:53:46 crc kubenswrapper[5029]: I0313 21:53:46.926162 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-utilities\") pod \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") "
Mar 13 21:53:46 crc kubenswrapper[5029]: I0313 21:53:46.926308 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-catalog-content\") pod \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\" (UID: \"9f2c9c53-e241-44c8-93c0-d2e53d77bf26\") "
Mar 13 21:53:46 crc kubenswrapper[5029]: I0313
21:53:46.927379 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-utilities" (OuterVolumeSpecName: "utilities") pod "9f2c9c53-e241-44c8-93c0-d2e53d77bf26" (UID: "9f2c9c53-e241-44c8-93c0-d2e53d77bf26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:53:46 crc kubenswrapper[5029]: I0313 21:53:46.956252 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-kube-api-access-cb2wg" (OuterVolumeSpecName: "kube-api-access-cb2wg") pod "9f2c9c53-e241-44c8-93c0-d2e53d77bf26" (UID: "9f2c9c53-e241-44c8-93c0-d2e53d77bf26"). InnerVolumeSpecName "kube-api-access-cb2wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.028732 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb2wg\" (UniqueName: \"kubernetes.io/projected/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-kube-api-access-cb2wg\") on node \"crc\" DevicePath \"\"" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.028776 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.120805 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f2c9c53-e241-44c8-93c0-d2e53d77bf26" (UID: "9f2c9c53-e241-44c8-93c0-d2e53d77bf26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.131493 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f2c9c53-e241-44c8-93c0-d2e53d77bf26-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.250898 5029 generic.go:334] "Generic (PLEG): container finished" podID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerID="696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42" exitCode=0 Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.250955 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cpcs9" event={"ID":"9f2c9c53-e241-44c8-93c0-d2e53d77bf26","Type":"ContainerDied","Data":"696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42"} Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.250993 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cpcs9" event={"ID":"9f2c9c53-e241-44c8-93c0-d2e53d77bf26","Type":"ContainerDied","Data":"7c76304222b24120dc048625644133f43e6c46c4c53d7cd2d51617c654015024"} Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.251024 5029 scope.go:117] "RemoveContainer" containerID="696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.251199 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cpcs9" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.288958 5029 scope.go:117] "RemoveContainer" containerID="de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.289704 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cpcs9"] Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.315605 5029 scope.go:117] "RemoveContainer" containerID="8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.334591 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cpcs9"] Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.369412 5029 scope.go:117] "RemoveContainer" containerID="696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42" Mar 13 21:53:47 crc kubenswrapper[5029]: E0313 21:53:47.370086 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42\": container with ID starting with 696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42 not found: ID does not exist" containerID="696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.370131 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42"} err="failed to get container status \"696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42\": rpc error: code = NotFound desc = could not find container \"696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42\": container with ID starting with 696bbab90ef6aed81c341040f6744c2f2dc7c9dbcb65390e879ba6a523ce9c42 not found: ID does 
not exist" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.370165 5029 scope.go:117] "RemoveContainer" containerID="de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8" Mar 13 21:53:47 crc kubenswrapper[5029]: E0313 21:53:47.370620 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8\": container with ID starting with de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8 not found: ID does not exist" containerID="de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.370641 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8"} err="failed to get container status \"de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8\": rpc error: code = NotFound desc = could not find container \"de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8\": container with ID starting with de82f1506147595b2292c3d35dc2c1f7a71325986a04efcb8e5b7d253566c0e8 not found: ID does not exist" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.370657 5029 scope.go:117] "RemoveContainer" containerID="8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127" Mar 13 21:53:47 crc kubenswrapper[5029]: E0313 21:53:47.370921 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127\": container with ID starting with 8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127 not found: ID does not exist" containerID="8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127" Mar 13 21:53:47 crc kubenswrapper[5029]: I0313 21:53:47.370950 5029 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127"} err="failed to get container status \"8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127\": rpc error: code = NotFound desc = could not find container \"8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127\": container with ID starting with 8faa91d57995ee1cb3517ff1f2c06e219f1e5729b4bc5b4446d210f1724b6127 not found: ID does not exist" Mar 13 21:53:48 crc kubenswrapper[5029]: I0313 21:53:48.614787 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" path="/var/lib/kubelet/pods/9f2c9c53-e241-44c8-93c0-d2e53d77bf26/volumes" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.155867 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557314-jp4cs"] Mar 13 21:54:00 crc kubenswrapper[5029]: E0313 21:54:00.157201 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerName="extract-content" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.157222 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerName="extract-content" Mar 13 21:54:00 crc kubenswrapper[5029]: E0313 21:54:00.157235 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerName="extract-utilities" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.157245 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerName="extract-utilities" Mar 13 21:54:00 crc kubenswrapper[5029]: E0313 21:54:00.157291 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerName="registry-server" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.157299 5029 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerName="registry-server" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.157596 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f2c9c53-e241-44c8-93c0-d2e53d77bf26" containerName="registry-server" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.158552 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557314-jp4cs" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.162194 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.162284 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.162529 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.167240 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9db9\" (UniqueName: \"kubernetes.io/projected/020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3-kube-api-access-j9db9\") pod \"auto-csr-approver-29557314-jp4cs\" (UID: \"020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3\") " pod="openshift-infra/auto-csr-approver-29557314-jp4cs" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.168847 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557314-jp4cs"] Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.270069 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9db9\" (UniqueName: \"kubernetes.io/projected/020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3-kube-api-access-j9db9\") pod \"auto-csr-approver-29557314-jp4cs\" (UID: 
\"020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3\") " pod="openshift-infra/auto-csr-approver-29557314-jp4cs" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.301892 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9db9\" (UniqueName: \"kubernetes.io/projected/020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3-kube-api-access-j9db9\") pod \"auto-csr-approver-29557314-jp4cs\" (UID: \"020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3\") " pod="openshift-infra/auto-csr-approver-29557314-jp4cs" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.477314 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557314-jp4cs" Mar 13 21:54:00 crc kubenswrapper[5029]: I0313 21:54:00.984907 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557314-jp4cs"] Mar 13 21:54:01 crc kubenswrapper[5029]: I0313 21:54:01.387271 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557314-jp4cs" event={"ID":"020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3","Type":"ContainerStarted","Data":"f3620c809eb94a1db4a0d50d3e96b76b342766eaeab57ad94f8e04fa6cc1c6c9"} Mar 13 21:54:03 crc kubenswrapper[5029]: I0313 21:54:03.419723 5029 generic.go:334] "Generic (PLEG): container finished" podID="020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3" containerID="90c13895a650bb86923f1ba10feb29fec99a443f3f85b65a8c5768758ad4d216" exitCode=0 Mar 13 21:54:03 crc kubenswrapper[5029]: I0313 21:54:03.420214 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557314-jp4cs" event={"ID":"020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3","Type":"ContainerDied","Data":"90c13895a650bb86923f1ba10feb29fec99a443f3f85b65a8c5768758ad4d216"} Mar 13 21:54:04 crc kubenswrapper[5029]: I0313 21:54:04.829540 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557314-jp4cs" Mar 13 21:54:04 crc kubenswrapper[5029]: I0313 21:54:04.970064 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9db9\" (UniqueName: \"kubernetes.io/projected/020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3-kube-api-access-j9db9\") pod \"020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3\" (UID: \"020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3\") " Mar 13 21:54:05 crc kubenswrapper[5029]: I0313 21:54:05.454393 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557314-jp4cs" event={"ID":"020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3","Type":"ContainerDied","Data":"f3620c809eb94a1db4a0d50d3e96b76b342766eaeab57ad94f8e04fa6cc1c6c9"} Mar 13 21:54:05 crc kubenswrapper[5029]: I0313 21:54:05.454804 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3620c809eb94a1db4a0d50d3e96b76b342766eaeab57ad94f8e04fa6cc1c6c9" Mar 13 21:54:05 crc kubenswrapper[5029]: I0313 21:54:05.454447 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557314-jp4cs" Mar 13 21:54:05 crc kubenswrapper[5029]: I0313 21:54:05.690385 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3-kube-api-access-j9db9" (OuterVolumeSpecName: "kube-api-access-j9db9") pod "020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3" (UID: "020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3"). InnerVolumeSpecName "kube-api-access-j9db9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:54:05 crc kubenswrapper[5029]: I0313 21:54:05.692840 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9db9\" (UniqueName: \"kubernetes.io/projected/020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3-kube-api-access-j9db9\") on node \"crc\" DevicePath \"\"" Mar 13 21:54:05 crc kubenswrapper[5029]: I0313 21:54:05.925647 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557308-g98jn"] Mar 13 21:54:05 crc kubenswrapper[5029]: I0313 21:54:05.934349 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557308-g98jn"] Mar 13 21:54:06 crc kubenswrapper[5029]: I0313 21:54:06.616933 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574a9994-4a63-44ff-98d5-dc305aba3cbf" path="/var/lib/kubelet/pods/574a9994-4a63-44ff-98d5-dc305aba3cbf/volumes" Mar 13 21:54:16 crc kubenswrapper[5029]: I0313 21:54:16.255879 5029 scope.go:117] "RemoveContainer" containerID="def0ddbfca733486a764e957fd716b5d75c94b70ec041aa8f46f38b44123e949" Mar 13 21:54:27 crc kubenswrapper[5029]: I0313 21:54:27.745103 5029 generic.go:334] "Generic (PLEG): container finished" podID="9a4b6e7f-443a-4b18-a1b4-84269b03935a" containerID="0825a54fceb1381a2822ccf6bcbc2466e86f3d832fa59d7877694185da8e1da5" exitCode=0 Mar 13 21:54:27 crc kubenswrapper[5029]: I0313 21:54:27.745236 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/crc-debug-g49x5" event={"ID":"9a4b6e7f-443a-4b18-a1b4-84269b03935a","Type":"ContainerDied","Data":"0825a54fceb1381a2822ccf6bcbc2466e86f3d832fa59d7877694185da8e1da5"} Mar 13 21:54:28 crc kubenswrapper[5029]: I0313 21:54:28.899535 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-g49x5" Mar 13 21:54:28 crc kubenswrapper[5029]: I0313 21:54:28.949682 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9kdhx/crc-debug-g49x5"] Mar 13 21:54:28 crc kubenswrapper[5029]: I0313 21:54:28.960061 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9kdhx/crc-debug-g49x5"] Mar 13 21:54:29 crc kubenswrapper[5029]: I0313 21:54:29.012096 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a4b6e7f-443a-4b18-a1b4-84269b03935a-host\") pod \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\" (UID: \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\") " Mar 13 21:54:29 crc kubenswrapper[5029]: I0313 21:54:29.012371 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqxst\" (UniqueName: \"kubernetes.io/projected/9a4b6e7f-443a-4b18-a1b4-84269b03935a-kube-api-access-rqxst\") pod \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\" (UID: \"9a4b6e7f-443a-4b18-a1b4-84269b03935a\") " Mar 13 21:54:29 crc kubenswrapper[5029]: I0313 21:54:29.013956 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a4b6e7f-443a-4b18-a1b4-84269b03935a-host" (OuterVolumeSpecName: "host") pod "9a4b6e7f-443a-4b18-a1b4-84269b03935a" (UID: "9a4b6e7f-443a-4b18-a1b4-84269b03935a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 21:54:29 crc kubenswrapper[5029]: I0313 21:54:29.034071 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a4b6e7f-443a-4b18-a1b4-84269b03935a-kube-api-access-rqxst" (OuterVolumeSpecName: "kube-api-access-rqxst") pod "9a4b6e7f-443a-4b18-a1b4-84269b03935a" (UID: "9a4b6e7f-443a-4b18-a1b4-84269b03935a"). InnerVolumeSpecName "kube-api-access-rqxst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:54:29 crc kubenswrapper[5029]: I0313 21:54:29.115360 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqxst\" (UniqueName: \"kubernetes.io/projected/9a4b6e7f-443a-4b18-a1b4-84269b03935a-kube-api-access-rqxst\") on node \"crc\" DevicePath \"\"" Mar 13 21:54:29 crc kubenswrapper[5029]: I0313 21:54:29.115396 5029 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a4b6e7f-443a-4b18-a1b4-84269b03935a-host\") on node \"crc\" DevicePath \"\"" Mar 13 21:54:29 crc kubenswrapper[5029]: I0313 21:54:29.766016 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cb3e39ccb7decab44c355e4355d99a5591a63e142588b0a28c9c67207f868a4" Mar 13 21:54:29 crc kubenswrapper[5029]: I0313 21:54:29.766071 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-g49x5" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.214165 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9kdhx/crc-debug-4f79h"] Mar 13 21:54:30 crc kubenswrapper[5029]: E0313 21:54:30.215087 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a4b6e7f-443a-4b18-a1b4-84269b03935a" containerName="container-00" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.215119 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a4b6e7f-443a-4b18-a1b4-84269b03935a" containerName="container-00" Mar 13 21:54:30 crc kubenswrapper[5029]: E0313 21:54:30.215172 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3" containerName="oc" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.215178 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3" containerName="oc" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.215365 5029 
memory_manager.go:354] "RemoveStaleState removing state" podUID="020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3" containerName="oc" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.215402 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a4b6e7f-443a-4b18-a1b4-84269b03935a" containerName="container-00" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.216173 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.345089 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-host\") pod \"crc-debug-4f79h\" (UID: \"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\") " pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.345403 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxb2\" (UniqueName: \"kubernetes.io/projected/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-kube-api-access-flxb2\") pod \"crc-debug-4f79h\" (UID: \"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\") " pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.447453 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxb2\" (UniqueName: \"kubernetes.io/projected/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-kube-api-access-flxb2\") pod \"crc-debug-4f79h\" (UID: \"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\") " pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.447607 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-host\") pod \"crc-debug-4f79h\" (UID: 
\"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\") " pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.447739 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-host\") pod \"crc-debug-4f79h\" (UID: \"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\") " pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.475742 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxb2\" (UniqueName: \"kubernetes.io/projected/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-kube-api-access-flxb2\") pod \"crc-debug-4f79h\" (UID: \"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\") " pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.536019 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.621918 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a4b6e7f-443a-4b18-a1b4-84269b03935a" path="/var/lib/kubelet/pods/9a4b6e7f-443a-4b18-a1b4-84269b03935a/volumes" Mar 13 21:54:30 crc kubenswrapper[5029]: I0313 21:54:30.777455 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/crc-debug-4f79h" event={"ID":"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a","Type":"ContainerStarted","Data":"657053d97b78c00dccf142691fc2e100f57546cf6955bb9c50510299116e6e8b"} Mar 13 21:54:31 crc kubenswrapper[5029]: I0313 21:54:31.789713 5029 generic.go:334] "Generic (PLEG): container finished" podID="d10e0e1f-98b3-46dc-a621-8eb3f0566d8a" containerID="7bc2f7e8a7a22e6e01c632936da08d91539c295cef68d9214e5d92bc8ceacee2" exitCode=0 Mar 13 21:54:31 crc kubenswrapper[5029]: I0313 21:54:31.790297 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-9kdhx/crc-debug-4f79h" event={"ID":"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a","Type":"ContainerDied","Data":"7bc2f7e8a7a22e6e01c632936da08d91539c295cef68d9214e5d92bc8ceacee2"} Mar 13 21:54:32 crc kubenswrapper[5029]: I0313 21:54:32.907310 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:33 crc kubenswrapper[5029]: I0313 21:54:33.018602 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flxb2\" (UniqueName: \"kubernetes.io/projected/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-kube-api-access-flxb2\") pod \"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\" (UID: \"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\") " Mar 13 21:54:33 crc kubenswrapper[5029]: I0313 21:54:33.018767 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-host\") pod \"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\" (UID: \"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a\") " Mar 13 21:54:33 crc kubenswrapper[5029]: I0313 21:54:33.018963 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-host" (OuterVolumeSpecName: "host") pod "d10e0e1f-98b3-46dc-a621-8eb3f0566d8a" (UID: "d10e0e1f-98b3-46dc-a621-8eb3f0566d8a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 21:54:33 crc kubenswrapper[5029]: I0313 21:54:33.019453 5029 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-host\") on node \"crc\" DevicePath \"\"" Mar 13 21:54:33 crc kubenswrapper[5029]: I0313 21:54:33.025782 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-kube-api-access-flxb2" (OuterVolumeSpecName: "kube-api-access-flxb2") pod "d10e0e1f-98b3-46dc-a621-8eb3f0566d8a" (UID: "d10e0e1f-98b3-46dc-a621-8eb3f0566d8a"). InnerVolumeSpecName "kube-api-access-flxb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:54:33 crc kubenswrapper[5029]: I0313 21:54:33.122215 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flxb2\" (UniqueName: \"kubernetes.io/projected/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a-kube-api-access-flxb2\") on node \"crc\" DevicePath \"\"" Mar 13 21:54:33 crc kubenswrapper[5029]: I0313 21:54:33.813553 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/crc-debug-4f79h" event={"ID":"d10e0e1f-98b3-46dc-a621-8eb3f0566d8a","Type":"ContainerDied","Data":"657053d97b78c00dccf142691fc2e100f57546cf6955bb9c50510299116e6e8b"} Mar 13 21:54:33 crc kubenswrapper[5029]: I0313 21:54:33.813604 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657053d97b78c00dccf142691fc2e100f57546cf6955bb9c50510299116e6e8b" Mar 13 21:54:33 crc kubenswrapper[5029]: I0313 21:54:33.813636 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-4f79h" Mar 13 21:54:34 crc kubenswrapper[5029]: I0313 21:54:34.767351 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9kdhx/crc-debug-4f79h"] Mar 13 21:54:34 crc kubenswrapper[5029]: I0313 21:54:34.775227 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9kdhx/crc-debug-4f79h"] Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.020306 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9kdhx/crc-debug-ddcb2"] Mar 13 21:54:36 crc kubenswrapper[5029]: E0313 21:54:36.020728 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10e0e1f-98b3-46dc-a621-8eb3f0566d8a" containerName="container-00" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.021103 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10e0e1f-98b3-46dc-a621-8eb3f0566d8a" containerName="container-00" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.021353 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10e0e1f-98b3-46dc-a621-8eb3f0566d8a" containerName="container-00" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.022140 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.196614 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xwj\" (UniqueName: \"kubernetes.io/projected/554b565a-7129-49ec-b97a-dc2580c883e0-kube-api-access-c8xwj\") pod \"crc-debug-ddcb2\" (UID: \"554b565a-7129-49ec-b97a-dc2580c883e0\") " pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.196768 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/554b565a-7129-49ec-b97a-dc2580c883e0-host\") pod \"crc-debug-ddcb2\" (UID: \"554b565a-7129-49ec-b97a-dc2580c883e0\") " pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.299879 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8xwj\" (UniqueName: \"kubernetes.io/projected/554b565a-7129-49ec-b97a-dc2580c883e0-kube-api-access-c8xwj\") pod \"crc-debug-ddcb2\" (UID: \"554b565a-7129-49ec-b97a-dc2580c883e0\") " pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.299958 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/554b565a-7129-49ec-b97a-dc2580c883e0-host\") pod \"crc-debug-ddcb2\" (UID: \"554b565a-7129-49ec-b97a-dc2580c883e0\") " pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.300214 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/554b565a-7129-49ec-b97a-dc2580c883e0-host\") pod \"crc-debug-ddcb2\" (UID: \"554b565a-7129-49ec-b97a-dc2580c883e0\") " pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:36 crc 
kubenswrapper[5029]: I0313 21:54:36.328156 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8xwj\" (UniqueName: \"kubernetes.io/projected/554b565a-7129-49ec-b97a-dc2580c883e0-kube-api-access-c8xwj\") pod \"crc-debug-ddcb2\" (UID: \"554b565a-7129-49ec-b97a-dc2580c883e0\") " pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.354212 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.624100 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d10e0e1f-98b3-46dc-a621-8eb3f0566d8a" path="/var/lib/kubelet/pods/d10e0e1f-98b3-46dc-a621-8eb3f0566d8a/volumes" Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.846630 5029 generic.go:334] "Generic (PLEG): container finished" podID="554b565a-7129-49ec-b97a-dc2580c883e0" containerID="90371454cd12a7b8ee4b9aa0786a451147a620035e547f0a2beaa0a6b1075f7d" exitCode=0 Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.846676 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" event={"ID":"554b565a-7129-49ec-b97a-dc2580c883e0","Type":"ContainerDied","Data":"90371454cd12a7b8ee4b9aa0786a451147a620035e547f0a2beaa0a6b1075f7d"} Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.846706 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" event={"ID":"554b565a-7129-49ec-b97a-dc2580c883e0","Type":"ContainerStarted","Data":"2dc4f956dd3ef52eceb793793341747b9615116ff4b2760ec0160870d1fc1808"} Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.889172 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9kdhx/crc-debug-ddcb2"] Mar 13 21:54:36 crc kubenswrapper[5029]: I0313 21:54:36.897698 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-9kdhx/crc-debug-ddcb2"] Mar 13 21:54:37 crc kubenswrapper[5029]: I0313 21:54:37.966545 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:38 crc kubenswrapper[5029]: I0313 21:54:38.141732 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8xwj\" (UniqueName: \"kubernetes.io/projected/554b565a-7129-49ec-b97a-dc2580c883e0-kube-api-access-c8xwj\") pod \"554b565a-7129-49ec-b97a-dc2580c883e0\" (UID: \"554b565a-7129-49ec-b97a-dc2580c883e0\") " Mar 13 21:54:38 crc kubenswrapper[5029]: I0313 21:54:38.142341 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/554b565a-7129-49ec-b97a-dc2580c883e0-host\") pod \"554b565a-7129-49ec-b97a-dc2580c883e0\" (UID: \"554b565a-7129-49ec-b97a-dc2580c883e0\") " Mar 13 21:54:38 crc kubenswrapper[5029]: I0313 21:54:38.142518 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554b565a-7129-49ec-b97a-dc2580c883e0-host" (OuterVolumeSpecName: "host") pod "554b565a-7129-49ec-b97a-dc2580c883e0" (UID: "554b565a-7129-49ec-b97a-dc2580c883e0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 21:54:38 crc kubenswrapper[5029]: I0313 21:54:38.142951 5029 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/554b565a-7129-49ec-b97a-dc2580c883e0-host\") on node \"crc\" DevicePath \"\"" Mar 13 21:54:38 crc kubenswrapper[5029]: I0313 21:54:38.157184 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/554b565a-7129-49ec-b97a-dc2580c883e0-kube-api-access-c8xwj" (OuterVolumeSpecName: "kube-api-access-c8xwj") pod "554b565a-7129-49ec-b97a-dc2580c883e0" (UID: "554b565a-7129-49ec-b97a-dc2580c883e0"). 
InnerVolumeSpecName "kube-api-access-c8xwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:54:38 crc kubenswrapper[5029]: I0313 21:54:38.245372 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8xwj\" (UniqueName: \"kubernetes.io/projected/554b565a-7129-49ec-b97a-dc2580c883e0-kube-api-access-c8xwj\") on node \"crc\" DevicePath \"\"" Mar 13 21:54:38 crc kubenswrapper[5029]: I0313 21:54:38.612808 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="554b565a-7129-49ec-b97a-dc2580c883e0" path="/var/lib/kubelet/pods/554b565a-7129-49ec-b97a-dc2580c883e0/volumes" Mar 13 21:54:38 crc kubenswrapper[5029]: I0313 21:54:38.871589 5029 scope.go:117] "RemoveContainer" containerID="90371454cd12a7b8ee4b9aa0786a451147a620035e547f0a2beaa0a6b1075f7d" Mar 13 21:54:38 crc kubenswrapper[5029]: I0313 21:54:38.871623 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9kdhx/crc-debug-ddcb2" Mar 13 21:54:55 crc kubenswrapper[5029]: I0313 21:54:55.092678 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57db7d86f6-rjplz_441f7f6f-8c00-4ae7-a970-b199a5d94c55/barbican-api/0.log" Mar 13 21:54:55 crc kubenswrapper[5029]: I0313 21:54:55.243994 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57db7d86f6-rjplz_441f7f6f-8c00-4ae7-a970-b199a5d94c55/barbican-api-log/0.log" Mar 13 21:54:55 crc kubenswrapper[5029]: I0313 21:54:55.317649 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65fd679d74-klxb9_7975f817-5324-4ea2-9f48-7d83b39c2fab/barbican-keystone-listener/0.log" Mar 13 21:54:55 crc kubenswrapper[5029]: I0313 21:54:55.562584 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bd9d96d9f-z6km7_f2251506-8d45-43fb-b88b-3fc76a486e60/barbican-worker/0.log" Mar 13 21:54:55 crc kubenswrapper[5029]: I0313 
21:54:55.600614 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bd9d96d9f-z6km7_f2251506-8d45-43fb-b88b-3fc76a486e60/barbican-worker-log/0.log" Mar 13 21:54:55 crc kubenswrapper[5029]: I0313 21:54:55.885756 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-rs9sr_0536889c-718f-4c69-a5ca-7428e7c351db/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:54:56 crc kubenswrapper[5029]: I0313 21:54:56.132859 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd727004-62dc-41e3-91b7-0fb181e9a44e/ceilometer-central-agent/0.log" Mar 13 21:54:56 crc kubenswrapper[5029]: I0313 21:54:56.192310 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd727004-62dc-41e3-91b7-0fb181e9a44e/proxy-httpd/0.log" Mar 13 21:54:56 crc kubenswrapper[5029]: I0313 21:54:56.230341 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd727004-62dc-41e3-91b7-0fb181e9a44e/ceilometer-notification-agent/0.log" Mar 13 21:54:56 crc kubenswrapper[5029]: I0313 21:54:56.258880 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65fd679d74-klxb9_7975f817-5324-4ea2-9f48-7d83b39c2fab/barbican-keystone-listener-log/0.log" Mar 13 21:54:56 crc kubenswrapper[5029]: I0313 21:54:56.414225 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd727004-62dc-41e3-91b7-0fb181e9a44e/sg-core/0.log" Mar 13 21:54:56 crc kubenswrapper[5029]: I0313 21:54:56.600635 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph_224a4b52-5147-4f0a-bda1-25eb237c0512/ceph/0.log" Mar 13 21:54:56 crc kubenswrapper[5029]: I0313 21:54:56.995819 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2ffb2426-fbfd-4856-a679-649eac82c558/cinder-api-log/0.log" Mar 13 21:54:57 crc 
kubenswrapper[5029]: I0313 21:54:57.069709 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2ffb2426-fbfd-4856-a679-649eac82c558/cinder-api/0.log" Mar 13 21:54:57 crc kubenswrapper[5029]: I0313 21:54:57.275291 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_85582134-3a4c-4127-8b04-5a0800fe403c/probe/0.log" Mar 13 21:54:57 crc kubenswrapper[5029]: I0313 21:54:57.389062 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54/cinder-scheduler/0.log" Mar 13 21:54:57 crc kubenswrapper[5029]: I0313 21:54:57.640168 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_eaf1d8b8-6dfa-4a48-a32f-afa94adf5e54/probe/0.log" Mar 13 21:54:57 crc kubenswrapper[5029]: I0313 21:54:57.978450 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_b5e3adcc-0538-4137-a9c4-09fb34e79fe9/probe/0.log" Mar 13 21:54:58 crc kubenswrapper[5029]: I0313 21:54:58.258893 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vzlm8_6566347c-a319-4ac9-a859-8cff6b7f47c0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:54:58 crc kubenswrapper[5029]: I0313 21:54:58.589349 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-wmh6d_65396eef-a783-4de6-9a3f-78632ce797c3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:54:58 crc kubenswrapper[5029]: I0313 21:54:58.835789 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_85582134-3a4c-4127-8b04-5a0800fe403c/cinder-backup/0.log" Mar 13 21:54:59 crc kubenswrapper[5029]: I0313 21:54:59.326413 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5d99fc9df9-t7swp_f47111bc-9b36-4714-b62d-cb3910f2445b/init/0.log" Mar 13 21:54:59 crc kubenswrapper[5029]: I0313 21:54:59.508525 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d99fc9df9-t7swp_f47111bc-9b36-4714-b62d-cb3910f2445b/init/0.log" Mar 13 21:54:59 crc kubenswrapper[5029]: I0313 21:54:59.658209 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2hk5h_85414e93-71aa-49bf-b7dd-00b07149e16b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:54:59 crc kubenswrapper[5029]: I0313 21:54:59.756729 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d99fc9df9-t7swp_f47111bc-9b36-4714-b62d-cb3910f2445b/dnsmasq-dns/0.log" Mar 13 21:54:59 crc kubenswrapper[5029]: I0313 21:54:59.878708 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_69b71985-d9f0-4b2c-85ea-b442ffb423c1/glance-log/0.log" Mar 13 21:54:59 crc kubenswrapper[5029]: I0313 21:54:59.952228 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_69b71985-d9f0-4b2c-85ea-b442ffb423c1/glance-httpd/0.log" Mar 13 21:55:00 crc kubenswrapper[5029]: I0313 21:55:00.133606 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6bdfe146-20b8-4a56-8a77-61affcc4e25f/glance-log/0.log" Mar 13 21:55:00 crc kubenswrapper[5029]: I0313 21:55:00.149031 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6bdfe146-20b8-4a56-8a77-61affcc4e25f/glance-httpd/0.log" Mar 13 21:55:00 crc kubenswrapper[5029]: I0313 21:55:00.495828 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-674bcdb76-8wx84_e88c424e-0503-40ac-9f24-5daa55912ff3/horizon/0.log" Mar 13 21:55:00 crc kubenswrapper[5029]: I0313 
21:55:00.531539 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7rmx9_db4e8811-7f7d-4e55-adc2-d75f2c5c007a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:01 crc kubenswrapper[5029]: I0313 21:55:01.083686 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-bvdg5_6156a413-1c34-4e41-888b-7e0f9cd0dd61/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:01 crc kubenswrapper[5029]: I0313 21:55:01.309933 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-674bcdb76-8wx84_e88c424e-0503-40ac-9f24-5daa55912ff3/horizon-log/0.log" Mar 13 21:55:01 crc kubenswrapper[5029]: I0313 21:55:01.431962 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29557261-h5hn8_1d45d6d4-22ee-43ee-af88-5259795bbf30/keystone-cron/0.log" Mar 13 21:55:01 crc kubenswrapper[5029]: I0313 21:55:01.450208 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_b5e3adcc-0538-4137-a9c4-09fb34e79fe9/cinder-volume/0.log" Mar 13 21:55:01 crc kubenswrapper[5029]: I0313 21:55:01.657317 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_1b04bebb-7126-472e-bfdc-f106f0190626/kube-state-metrics/0.log" Mar 13 21:55:01 crc kubenswrapper[5029]: I0313 21:55:01.837361 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-c2nn5_103d724b-82ad-4507-960c-6739fa89ab17/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:02 crc kubenswrapper[5029]: I0313 21:55:02.399423 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf/probe/0.log" Mar 13 21:55:02 crc kubenswrapper[5029]: I0313 21:55:02.489731 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-api-0_6b8f9967-671e-49b9-8e28-15c9b460086e/manila-api/0.log" Mar 13 21:55:02 crc kubenswrapper[5029]: I0313 21:55:02.519896 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8214b2f0-21e5-4df3-8f9f-56fd0c5bdbcf/manila-scheduler/0.log" Mar 13 21:55:02 crc kubenswrapper[5029]: I0313 21:55:02.755490 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1854a458-f657-4ddf-a316-e313a3403137/probe/0.log" Mar 13 21:55:03 crc kubenswrapper[5029]: I0313 21:55:03.153783 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1854a458-f657-4ddf-a316-e313a3403137/manila-share/0.log" Mar 13 21:55:03 crc kubenswrapper[5029]: I0313 21:55:03.220273 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_6b8f9967-671e-49b9-8e28-15c9b460086e/manila-api-log/0.log" Mar 13 21:55:03 crc kubenswrapper[5029]: I0313 21:55:03.792812 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-r2xfs_be4091de-1faa-4cda-b53b-22c6a3b67e74/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:04 crc kubenswrapper[5029]: I0313 21:55:04.323944 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d875c8b5-6tdfp_2049789d-643f-478a-8c68-c0ab07e8a3a3/neutron-httpd/0.log" Mar 13 21:55:05 crc kubenswrapper[5029]: I0313 21:55:05.175165 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d875c8b5-6tdfp_2049789d-643f-478a-8c68-c0ab07e8a3a3/neutron-api/0.log" Mar 13 21:55:05 crc kubenswrapper[5029]: I0313 21:55:05.891074 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7bb76fc874-xq9l8_07e8467c-2f07-49d5-8c20-a33c8f9d4291/keystone-api/0.log" Mar 13 21:55:06 crc kubenswrapper[5029]: I0313 21:55:06.173118 5029 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-cell0-conductor-0_0c032309-b0f6-4917-8e27-6e39bc22f646/nova-cell0-conductor-conductor/0.log" Mar 13 21:55:06 crc kubenswrapper[5029]: I0313 21:55:06.612708 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_81a883af-abb4-4281-a082-af5d115e022c/nova-cell1-conductor-conductor/0.log" Mar 13 21:55:06 crc kubenswrapper[5029]: I0313 21:55:06.868175 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_78dac452-38e6-4307-b8ec-097bb5c99654/nova-cell1-novncproxy-novncproxy/0.log" Mar 13 21:55:07 crc kubenswrapper[5029]: I0313 21:55:07.204628 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-fkfg4_b58e81ba-bde3-4a48-b2b6-9e52514608eb/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:07 crc kubenswrapper[5029]: I0313 21:55:07.417712 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8407884a-22ef-4825-86e3-829a7235545f/nova-api-log/0.log" Mar 13 21:55:07 crc kubenswrapper[5029]: I0313 21:55:07.550005 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fa10f96c-8f94-48e8-8eb3-e0d7692e470e/nova-metadata-log/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 21:55:08.127437 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_97961996-b234-441c-ba7c-2c479dfae7f4/mysql-bootstrap/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 21:55:08.224832 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8407884a-22ef-4825-86e3-829a7235545f/nova-api-api/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 21:55:08.348540 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4267cfcc-949c-4fc5-8564-e11f5be38d85/nova-scheduler-scheduler/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 
21:55:08.361386 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_97961996-b234-441c-ba7c-2c479dfae7f4/mysql-bootstrap/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 21:55:08.371756 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fa10f96c-8f94-48e8-8eb3-e0d7692e470e/nova-metadata-metadata/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 21:55:08.492260 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_97961996-b234-441c-ba7c-2c479dfae7f4/galera/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 21:55:08.647250 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe158656-b08f-4364-832e-f19c0f46d845/mysql-bootstrap/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 21:55:08.870787 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe158656-b08f-4364-832e-f19c0f46d845/mysql-bootstrap/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 21:55:08.948687 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_fa553312-0146-41c1-bc2e-9147af234ac8/openstackclient/0.log" Mar 13 21:55:08 crc kubenswrapper[5029]: I0313 21:55:08.977561 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe158656-b08f-4364-832e-f19c0f46d845/galera/0.log" Mar 13 21:55:09 crc kubenswrapper[5029]: I0313 21:55:09.499414 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fzdcm_0a248b29-b82a-41d1-aaa3-e7d12210ae6c/openstack-network-exporter/0.log" Mar 13 21:55:09 crc kubenswrapper[5029]: I0313 21:55:09.545611 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bj9ld_c4389075-f837-43e3-acc4-b577cdf1f05c/ovsdb-server-init/0.log" Mar 13 21:55:09 crc kubenswrapper[5029]: I0313 21:55:09.696354 5029 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bj9ld_c4389075-f837-43e3-acc4-b577cdf1f05c/ovsdb-server-init/0.log" Mar 13 21:55:09 crc kubenswrapper[5029]: I0313 21:55:09.743339 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bj9ld_c4389075-f837-43e3-acc4-b577cdf1f05c/ovs-vswitchd/0.log" Mar 13 21:55:09 crc kubenswrapper[5029]: I0313 21:55:09.778309 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bj9ld_c4389075-f837-43e3-acc4-b577cdf1f05c/ovsdb-server/0.log" Mar 13 21:55:09 crc kubenswrapper[5029]: I0313 21:55:09.955351 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xvrv7_09599f34-8760-4612-9d50-925aeb8134b4/ovn-controller/0.log" Mar 13 21:55:10 crc kubenswrapper[5029]: I0313 21:55:10.094184 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-q87rb_f8235dbd-1bae-4cce-a053-03f7c07d6ce7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:10 crc kubenswrapper[5029]: I0313 21:55:10.229725 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a6777edf-388f-48a7-92aa-eff24b6b2bfd/openstack-network-exporter/0.log" Mar 13 21:55:10 crc kubenswrapper[5029]: I0313 21:55:10.370172 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a6777edf-388f-48a7-92aa-eff24b6b2bfd/ovn-northd/0.log" Mar 13 21:55:10 crc kubenswrapper[5029]: I0313 21:55:10.488084 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_044b4140-6d50-42d6-893a-2f35ff0bc7b3/openstack-network-exporter/0.log" Mar 13 21:55:10 crc kubenswrapper[5029]: I0313 21:55:10.681550 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_044b4140-6d50-42d6-893a-2f35ff0bc7b3/ovsdbserver-nb/0.log" Mar 13 21:55:10 crc kubenswrapper[5029]: I0313 21:55:10.768922 5029 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_119f4c09-be62-4769-a9e5-1af49cca26c6/ovsdbserver-sb/0.log" Mar 13 21:55:10 crc kubenswrapper[5029]: I0313 21:55:10.821365 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_119f4c09-be62-4769-a9e5-1af49cca26c6/openstack-network-exporter/0.log" Mar 13 21:55:11 crc kubenswrapper[5029]: I0313 21:55:11.242428 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b09567a2-ae01-47b2-98be-4e4b9ee54a66/setup-container/0.log" Mar 13 21:55:11 crc kubenswrapper[5029]: I0313 21:55:11.449834 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b09567a2-ae01-47b2-98be-4e4b9ee54a66/setup-container/0.log" Mar 13 21:55:11 crc kubenswrapper[5029]: I0313 21:55:11.466026 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b09567a2-ae01-47b2-98be-4e4b9ee54a66/rabbitmq/0.log" Mar 13 21:55:11 crc kubenswrapper[5029]: I0313 21:55:11.532700 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c9b98d8-kzhp5_d747ae9b-00da-450d-a0cf-cd3a198cad72/placement-api/0.log" Mar 13 21:55:11 crc kubenswrapper[5029]: I0313 21:55:11.629221 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c9b98d8-kzhp5_d747ae9b-00da-450d-a0cf-cd3a198cad72/placement-log/0.log" Mar 13 21:55:11 crc kubenswrapper[5029]: I0313 21:55:11.694995 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_473790b1-7b66-4983-89fa-22e81a350616/setup-container/0.log" Mar 13 21:55:11 crc kubenswrapper[5029]: I0313 21:55:11.975359 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_473790b1-7b66-4983-89fa-22e81a350616/rabbitmq/0.log" Mar 13 21:55:11 crc kubenswrapper[5029]: I0313 21:55:11.996290 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_473790b1-7b66-4983-89fa-22e81a350616/setup-container/0.log" Mar 13 21:55:12 crc kubenswrapper[5029]: I0313 21:55:12.010033 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-22m29_e2ccac21-2f90-4c85-aaf5-edd2adb44957/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:12 crc kubenswrapper[5029]: I0313 21:55:12.280094 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-56klv_7914fbef-d24e-4d69-aa5d-1bec7c231341/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:12 crc kubenswrapper[5029]: I0313 21:55:12.364655 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-cf94w_ae031e82-8607-4f07-a080-d259c4dd17e2/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:12 crc kubenswrapper[5029]: I0313 21:55:12.503311 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5qqtm_b37a021c-5749-4a8c-b0ca-22cc684d3c78/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:12 crc kubenswrapper[5029]: I0313 21:55:12.628199 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cwkds_c718816f-d85a-4401-ac91-2365bffde224/ssh-known-hosts-edpm-deployment/0.log" Mar 13 21:55:12 crc kubenswrapper[5029]: I0313 21:55:12.908436 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-94bcffbb7-lqxc5_d145e01e-08f4-42f3-b239-86e0abcb2ec1/proxy-server/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.036340 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-94bcffbb7-lqxc5_d145e01e-08f4-42f3-b239-86e0abcb2ec1/proxy-httpd/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.235110 5029 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2phnh_68daffaa-8e1e-4af0-99e1-6fe5b9aa04b6/swift-ring-rebalance/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.425055 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/account-auditor/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.436669 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/account-reaper/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.602352 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/account-replicator/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.656486 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/container-auditor/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.703413 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/account-server/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.725988 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/container-replicator/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.859042 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/container-server/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.933377 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/container-updater/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.962408 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/object-expirer/0.log" Mar 13 21:55:13 crc kubenswrapper[5029]: I0313 21:55:13.995993 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/object-auditor/0.log" Mar 13 21:55:14 crc kubenswrapper[5029]: I0313 21:55:14.127730 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/object-server/0.log" Mar 13 21:55:14 crc kubenswrapper[5029]: I0313 21:55:14.150328 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/object-replicator/0.log" Mar 13 21:55:14 crc kubenswrapper[5029]: I0313 21:55:14.295258 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/rsync/0.log" Mar 13 21:55:14 crc kubenswrapper[5029]: I0313 21:55:14.296292 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/object-updater/0.log" Mar 13 21:55:14 crc kubenswrapper[5029]: I0313 21:55:14.406052 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81a1e5be-bbdf-4a80-a209-3acb956f5c86/swift-recon-cron/0.log" Mar 13 21:55:14 crc kubenswrapper[5029]: I0313 21:55:14.644252 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hpdgp_ee60ebd2-90a0-4b71-96e5-01348f8c7ba7/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:14 crc kubenswrapper[5029]: I0313 21:55:14.782255 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_ac9d86b5-6cef-43ea-90c2-3aebba7f6ced/tempest-tests-tempest-tests-runner/0.log" Mar 13 21:55:14 crc kubenswrapper[5029]: I0313 21:55:14.871586 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-2mzpd_cafa7079-daee-42e6-818b-32277058379d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 13 21:55:29 crc kubenswrapper[5029]: I0313 21:55:29.640038 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_10b7646b-bd89-43c4-8fa2-2d28c1327c65/memcached/0.log" Mar 13 21:55:50 crc kubenswrapper[5029]: I0313 21:55:50.402970 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4_00c86530-24c9-45c2-857a-44a29dba7ec3/util/0.log" Mar 13 21:55:50 crc kubenswrapper[5029]: I0313 21:55:50.735431 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4_00c86530-24c9-45c2-857a-44a29dba7ec3/pull/0.log" Mar 13 21:55:50 crc kubenswrapper[5029]: I0313 21:55:50.742890 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4_00c86530-24c9-45c2-857a-44a29dba7ec3/pull/0.log" Mar 13 21:55:50 crc kubenswrapper[5029]: I0313 21:55:50.745193 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4_00c86530-24c9-45c2-857a-44a29dba7ec3/util/0.log" Mar 13 21:55:50 crc kubenswrapper[5029]: I0313 21:55:50.994370 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4_00c86530-24c9-45c2-857a-44a29dba7ec3/util/0.log" Mar 13 21:55:51 crc kubenswrapper[5029]: I0313 21:55:51.003291 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4_00c86530-24c9-45c2-857a-44a29dba7ec3/pull/0.log" Mar 13 21:55:51 crc kubenswrapper[5029]: I0313 
21:55:51.051387 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a89c8ac2517d4a03b907c81221df4e2b81e414ad200438c5b5185b1fef9zpj4_00c86530-24c9-45c2-857a-44a29dba7ec3/extract/0.log" Mar 13 21:55:51 crc kubenswrapper[5029]: I0313 21:55:51.448810 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-d47688694-cmqrn_5af430c9-929c-4f4b-8a2e-0b346433c966/manager/0.log" Mar 13 21:55:51 crc kubenswrapper[5029]: I0313 21:55:51.494032 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-djjwn_9fae77a6-7657-435b-9eaa-46738bd3adff/manager/0.log" Mar 13 21:55:51 crc kubenswrapper[5029]: I0313 21:55:51.822190 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-8nx6k_8572f8c5-5098-41a3-8596-e93818c51912/manager/0.log" Mar 13 21:55:51 crc kubenswrapper[5029]: I0313 21:55:51.879893 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-jtfsz_cb6725e8-bfb1-4ae6-884c-d70e86c2e268/manager/0.log" Mar 13 21:55:52 crc kubenswrapper[5029]: I0313 21:55:52.099131 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-pthcv_c78e7c55-5a08-44a3-9ab9-8229d3b63c95/manager/0.log" Mar 13 21:55:52 crc kubenswrapper[5029]: I0313 21:55:52.476149 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bc894d9b-qvzqz_03ada4f5-407f-4ce4-8cdd-b91ba50d6e24/manager/0.log" Mar 13 21:55:52 crc kubenswrapper[5029]: I0313 21:55:52.744073 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54dc5b8f8d-jwm4t_9885322a-6140-443a-9c3a-d21a4674c0f9/manager/0.log" Mar 13 21:55:52 crc 
kubenswrapper[5029]: I0313 21:55:52.836463 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-gzknz_e5ca1347-56a7-4fea-8256-0728bc438b76/manager/0.log" Mar 13 21:55:53 crc kubenswrapper[5029]: I0313 21:55:53.087044 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-57b484b4df-dqb4l_0ea96653-f3ad-443c-85cb-27806cc8d02f/manager/0.log" Mar 13 21:55:53 crc kubenswrapper[5029]: I0313 21:55:53.294832 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5b6b6b4c9f-5stmj_465d67e8-1ca2-4c48-9ea6-5a46f41e4333/manager/0.log" Mar 13 21:55:53 crc kubenswrapper[5029]: I0313 21:55:53.327269 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-wss56_62985a1a-96c3-413d-b4ba-1e30082b4252/manager/0.log" Mar 13 21:55:53 crc kubenswrapper[5029]: I0313 21:55:53.413370 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-wkr5q_0bbae089-e35f-4e2a-98f9-3348cb910e91/manager/0.log" Mar 13 21:55:53 crc kubenswrapper[5029]: I0313 21:55:53.930700 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-2zjps_60caa364-7d62-4d19-8de1-6b231b90adb7/manager/0.log" Mar 13 21:55:53 crc kubenswrapper[5029]: I0313 21:55:53.937954 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f84474648-strvq_246360b4-7120-4eb9-b734-cfd22fb35bc6/manager/0.log" Mar 13 21:55:54 crc kubenswrapper[5029]: I0313 21:55:54.113172 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b75gm8n_5f05cebc-30a2-43ca-8ecf-31853a8f2600/manager/0.log" Mar 13 21:55:54 crc kubenswrapper[5029]: I0313 21:55:54.383015 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5c46d6fb64-hjq8h_f889ccf3-c017-4e72-8f23-d5355cbade76/operator/0.log" Mar 13 21:55:54 crc kubenswrapper[5029]: I0313 21:55:54.502330 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gd4fj_d94530ec-fdc6-4023-bca6-b8b62ed8f029/registry-server/0.log" Mar 13 21:55:54 crc kubenswrapper[5029]: I0313 21:55:54.659689 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-r6d75_54ccdb4e-12ea-481d-b139-21820e7cb430/manager/0.log" Mar 13 21:55:54 crc kubenswrapper[5029]: I0313 21:55:54.750827 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-h2xd9_b7d71625-72b5-4359-92ed-1931a3fe6b96/manager/0.log" Mar 13 21:55:54 crc kubenswrapper[5029]: I0313 21:55:54.962183 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lm87s_4730a688-7219-434b-8ab5-88c3023144e1/operator/0.log" Mar 13 21:55:55 crc kubenswrapper[5029]: I0313 21:55:55.124166 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7f9cc5dd44-p2j7s_2ec9fbff-bc5a-402c-9af7-f5cb8febf410/manager/0.log" Mar 13 21:55:55 crc kubenswrapper[5029]: I0313 21:55:55.297958 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6854b8b9d9-przwp_ed2536ff-a21c-4134-9acc-6d6dcc2243e4/manager/0.log" Mar 13 21:55:55 crc kubenswrapper[5029]: I0313 21:55:55.449221 5029 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-4rbtk_df55c0eb-db5c-48b7-9b8b-997253cb8510/manager/0.log" Mar 13 21:55:55 crc kubenswrapper[5029]: I0313 21:55:55.523019 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5698bc49b8-w5dsp_c2af04e3-221f-45fc-8a9f-c0f413b9b95c/manager/0.log" Mar 13 21:55:55 crc kubenswrapper[5029]: I0313 21:55:55.541387 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-4ckwc_1b78339c-69bb-4905-af68-29313b2e2227/manager/0.log" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.152895 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557316-xwt2b"] Mar 13 21:56:00 crc kubenswrapper[5029]: E0313 21:56:00.154184 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554b565a-7129-49ec-b97a-dc2580c883e0" containerName="container-00" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.154202 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="554b565a-7129-49ec-b97a-dc2580c883e0" containerName="container-00" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.154428 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="554b565a-7129-49ec-b97a-dc2580c883e0" containerName="container-00" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.155364 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557316-xwt2b" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.158343 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.158510 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.158601 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.165292 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557316-xwt2b"] Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.303361 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9w6l\" (UniqueName: \"kubernetes.io/projected/6731e82b-37f1-4810-90c4-fa12858b47f1-kube-api-access-r9w6l\") pod \"auto-csr-approver-29557316-xwt2b\" (UID: \"6731e82b-37f1-4810-90c4-fa12858b47f1\") " pod="openshift-infra/auto-csr-approver-29557316-xwt2b" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.405230 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9w6l\" (UniqueName: \"kubernetes.io/projected/6731e82b-37f1-4810-90c4-fa12858b47f1-kube-api-access-r9w6l\") pod \"auto-csr-approver-29557316-xwt2b\" (UID: \"6731e82b-37f1-4810-90c4-fa12858b47f1\") " pod="openshift-infra/auto-csr-approver-29557316-xwt2b" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.439603 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9w6l\" (UniqueName: \"kubernetes.io/projected/6731e82b-37f1-4810-90c4-fa12858b47f1-kube-api-access-r9w6l\") pod \"auto-csr-approver-29557316-xwt2b\" (UID: \"6731e82b-37f1-4810-90c4-fa12858b47f1\") " 
pod="openshift-infra/auto-csr-approver-29557316-xwt2b" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.478754 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557316-xwt2b" Mar 13 21:56:00 crc kubenswrapper[5029]: I0313 21:56:00.987600 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557316-xwt2b"] Mar 13 21:56:01 crc kubenswrapper[5029]: I0313 21:56:01.796491 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557316-xwt2b" event={"ID":"6731e82b-37f1-4810-90c4-fa12858b47f1","Type":"ContainerStarted","Data":"ad110b5b8f572c65af97723e5cbb133483b9e22db03f9f113fc51eaa4a3b2f80"} Mar 13 21:56:01 crc kubenswrapper[5029]: I0313 21:56:01.950969 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:56:01 crc kubenswrapper[5029]: I0313 21:56:01.951383 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:56:02 crc kubenswrapper[5029]: I0313 21:56:02.810584 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557316-xwt2b" event={"ID":"6731e82b-37f1-4810-90c4-fa12858b47f1","Type":"ContainerStarted","Data":"8475699eabf49a57c9ae962f39ed0a3ddfb8e96998bf0a13bb06f64a3797ad35"} Mar 13 21:56:02 crc kubenswrapper[5029]: I0313 21:56:02.827925 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29557316-xwt2b" podStartSLOduration=1.996243642 podStartE2EDuration="2.827908368s" podCreationTimestamp="2026-03-13 21:56:00 +0000 UTC" firstStartedPulling="2026-03-13 21:56:01.011059313 +0000 UTC m=+5321.027141706" lastFinishedPulling="2026-03-13 21:56:01.842724039 +0000 UTC m=+5321.858806432" observedRunningTime="2026-03-13 21:56:02.824985077 +0000 UTC m=+5322.841067480" watchObservedRunningTime="2026-03-13 21:56:02.827908368 +0000 UTC m=+5322.843990771" Mar 13 21:56:03 crc kubenswrapper[5029]: I0313 21:56:03.850978 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557316-xwt2b" event={"ID":"6731e82b-37f1-4810-90c4-fa12858b47f1","Type":"ContainerDied","Data":"8475699eabf49a57c9ae962f39ed0a3ddfb8e96998bf0a13bb06f64a3797ad35"} Mar 13 21:56:03 crc kubenswrapper[5029]: I0313 21:56:03.851449 5029 generic.go:334] "Generic (PLEG): container finished" podID="6731e82b-37f1-4810-90c4-fa12858b47f1" containerID="8475699eabf49a57c9ae962f39ed0a3ddfb8e96998bf0a13bb06f64a3797ad35" exitCode=0 Mar 13 21:56:05 crc kubenswrapper[5029]: I0313 21:56:05.333646 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557316-xwt2b" Mar 13 21:56:05 crc kubenswrapper[5029]: I0313 21:56:05.427926 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9w6l\" (UniqueName: \"kubernetes.io/projected/6731e82b-37f1-4810-90c4-fa12858b47f1-kube-api-access-r9w6l\") pod \"6731e82b-37f1-4810-90c4-fa12858b47f1\" (UID: \"6731e82b-37f1-4810-90c4-fa12858b47f1\") " Mar 13 21:56:05 crc kubenswrapper[5029]: I0313 21:56:05.436675 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731e82b-37f1-4810-90c4-fa12858b47f1-kube-api-access-r9w6l" (OuterVolumeSpecName: "kube-api-access-r9w6l") pod "6731e82b-37f1-4810-90c4-fa12858b47f1" (UID: "6731e82b-37f1-4810-90c4-fa12858b47f1"). InnerVolumeSpecName "kube-api-access-r9w6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:56:05 crc kubenswrapper[5029]: I0313 21:56:05.532167 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9w6l\" (UniqueName: \"kubernetes.io/projected/6731e82b-37f1-4810-90c4-fa12858b47f1-kube-api-access-r9w6l\") on node \"crc\" DevicePath \"\"" Mar 13 21:56:05 crc kubenswrapper[5029]: I0313 21:56:05.872917 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557316-xwt2b" event={"ID":"6731e82b-37f1-4810-90c4-fa12858b47f1","Type":"ContainerDied","Data":"ad110b5b8f572c65af97723e5cbb133483b9e22db03f9f113fc51eaa4a3b2f80"} Mar 13 21:56:05 crc kubenswrapper[5029]: I0313 21:56:05.872969 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad110b5b8f572c65af97723e5cbb133483b9e22db03f9f113fc51eaa4a3b2f80" Mar 13 21:56:05 crc kubenswrapper[5029]: I0313 21:56:05.873010 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557316-xwt2b" Mar 13 21:56:05 crc kubenswrapper[5029]: I0313 21:56:05.912607 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557310-l649b"] Mar 13 21:56:05 crc kubenswrapper[5029]: I0313 21:56:05.926779 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557310-l649b"] Mar 13 21:56:06 crc kubenswrapper[5029]: I0313 21:56:06.611898 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431c15e2-dfc7-4e43-bbfe-7d5f8cccb700" path="/var/lib/kubelet/pods/431c15e2-dfc7-4e43-bbfe-7d5f8cccb700/volumes" Mar 13 21:56:16 crc kubenswrapper[5029]: I0313 21:56:16.424470 5029 scope.go:117] "RemoveContainer" containerID="f6f47774bb05f5cbbede1c4bbbbf28d873816c695238b5b0cc306e185c1e66ec" Mar 13 21:56:21 crc kubenswrapper[5029]: I0313 21:56:21.395790 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-h2sxq_a0d54d7e-5ec4-46ce-b90e-96e976596cc3/control-plane-machine-set-operator/0.log" Mar 13 21:56:21 crc kubenswrapper[5029]: I0313 21:56:21.575349 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mmwnc_7e26e65c-4cb6-4094-b92b-9b4e0b36253b/kube-rbac-proxy/0.log" Mar 13 21:56:21 crc kubenswrapper[5029]: I0313 21:56:21.631108 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mmwnc_7e26e65c-4cb6-4094-b92b-9b4e0b36253b/machine-api-operator/0.log" Mar 13 21:56:31 crc kubenswrapper[5029]: I0313 21:56:31.949942 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 
21:56:31 crc kubenswrapper[5029]: I0313 21:56:31.950621 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:56:36 crc kubenswrapper[5029]: I0313 21:56:36.492497 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xgksp_3cf03391-9a73-41f5-96dd-4c3288ef36fc/cert-manager-controller/0.log" Mar 13 21:56:36 crc kubenswrapper[5029]: I0313 21:56:36.684168 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-wn4ds_e348abbe-f890-45ea-906e-28f15df7c05a/cert-manager-cainjector/0.log" Mar 13 21:56:36 crc kubenswrapper[5029]: I0313 21:56:36.697130 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5snlv_916b635c-3f33-4546-80e7-33e61e2bd39c/cert-manager-webhook/0.log" Mar 13 21:56:52 crc kubenswrapper[5029]: I0313 21:56:52.941369 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-6rvrn_ad29b302-2f20-4bf5-bd5f-c40ac11bebf4/nmstate-console-plugin/0.log" Mar 13 21:56:52 crc kubenswrapper[5029]: I0313 21:56:52.981925 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kfjhn_10c1789d-86d9-4de6-a518-80129bc65d08/nmstate-handler/0.log" Mar 13 21:56:53 crc kubenswrapper[5029]: I0313 21:56:53.119676 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-bs5gg_d5ec24be-1999-4337-961a-aa0fe51a903a/kube-rbac-proxy/0.log" Mar 13 21:56:53 crc kubenswrapper[5029]: I0313 21:56:53.155796 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-bs5gg_d5ec24be-1999-4337-961a-aa0fe51a903a/nmstate-metrics/0.log" Mar 13 21:56:53 crc kubenswrapper[5029]: I0313 21:56:53.356610 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-f6kxx_e8119630-7aa1-4ab3-a38c-de26de2185d3/nmstate-operator/0.log" Mar 13 21:56:53 crc kubenswrapper[5029]: I0313 21:56:53.390730 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-nxc2w_2ea9a98d-15cc-4d2f-9d80-1c7b4ab12488/nmstate-webhook/0.log" Mar 13 21:57:01 crc kubenswrapper[5029]: I0313 21:57:01.950519 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 21:57:01 crc kubenswrapper[5029]: I0313 21:57:01.951279 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 21:57:01 crc kubenswrapper[5029]: I0313 21:57:01.951350 5029 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28st2" Mar 13 21:57:01 crc kubenswrapper[5029]: I0313 21:57:01.952327 5029 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057"} pod="openshift-machine-config-operator/machine-config-daemon-28st2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 
13 21:57:01 crc kubenswrapper[5029]: I0313 21:57:01.952401 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" containerID="cri-o://87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" gracePeriod=600 Mar 13 21:57:02 crc kubenswrapper[5029]: E0313 21:57:02.191458 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:57:02 crc kubenswrapper[5029]: I0313 21:57:02.475125 5029 generic.go:334] "Generic (PLEG): container finished" podID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" exitCode=0 Mar 13 21:57:02 crc kubenswrapper[5029]: I0313 21:57:02.475205 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerDied","Data":"87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057"} Mar 13 21:57:02 crc kubenswrapper[5029]: I0313 21:57:02.475265 5029 scope.go:117] "RemoveContainer" containerID="2afe5730040ef35af35e0a35fbe930a07c61a8afc8eb6ad9a3d6ef4e635dfa99" Mar 13 21:57:02 crc kubenswrapper[5029]: I0313 21:57:02.476559 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:57:02 crc kubenswrapper[5029]: E0313 21:57:02.477091 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:57:13 crc kubenswrapper[5029]: I0313 21:57:13.600212 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:57:13 crc kubenswrapper[5029]: E0313 21:57:13.601424 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:57:24 crc kubenswrapper[5029]: I0313 21:57:24.378897 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-tlxnq_148d0749-47d0-44a8-b445-9464b9370508/kube-rbac-proxy/0.log" Mar 13 21:57:24 crc kubenswrapper[5029]: I0313 21:57:24.532757 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-tlxnq_148d0749-47d0-44a8-b445-9464b9370508/controller/0.log" Mar 13 21:57:24 crc kubenswrapper[5029]: I0313 21:57:24.618637 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-frr-files/0.log" Mar 13 21:57:24 crc kubenswrapper[5029]: I0313 21:57:24.813320 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-frr-files/0.log" Mar 13 21:57:24 crc kubenswrapper[5029]: I0313 21:57:24.818412 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-metrics/0.log" Mar 13 21:57:24 crc kubenswrapper[5029]: I0313 21:57:24.830049 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-reloader/0.log" Mar 13 21:57:24 crc kubenswrapper[5029]: I0313 21:57:24.848136 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-reloader/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.059443 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-frr-files/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.066872 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-metrics/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.069085 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-metrics/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.073427 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-reloader/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.254072 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-reloader/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.267024 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-metrics/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.267923 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/cp-frr-files/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.288767 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/controller/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.492945 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/frr-metrics/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.504132 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/kube-rbac-proxy-frr/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.504160 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/kube-rbac-proxy/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.736771 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/reloader/0.log" Mar 13 21:57:25 crc kubenswrapper[5029]: I0313 21:57:25.788660 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-mrnn8_f43167b4-ff02-4f87-98af-4f7e445e4620/frr-k8s-webhook-server/0.log" Mar 13 21:57:26 crc kubenswrapper[5029]: I0313 21:57:26.022371 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b55d4cdb9-s58fm_3b95a923-5775-4f8c-95aa-be566bc0d78c/manager/0.log" Mar 13 21:57:26 crc kubenswrapper[5029]: I0313 21:57:26.170950 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-94f7c7558-44tlt_30e521ab-6234-4e1a-9036-7c709e06c9b1/webhook-server/0.log" Mar 13 21:57:26 crc kubenswrapper[5029]: I0313 21:57:26.251901 5029 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tp4f4_4ae672f1-e9e8-4adc-8b6d-a0005d030621/kube-rbac-proxy/0.log" Mar 13 21:57:26 crc kubenswrapper[5029]: I0313 21:57:26.970407 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tp4f4_4ae672f1-e9e8-4adc-8b6d-a0005d030621/speaker/0.log" Mar 13 21:57:27 crc kubenswrapper[5029]: I0313 21:57:27.581319 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f26jf_62643dbe-126d-43e2-a08e-483ca7864ea6/frr/0.log" Mar 13 21:57:27 crc kubenswrapper[5029]: I0313 21:57:27.600142 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:57:27 crc kubenswrapper[5029]: E0313 21:57:27.600413 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:57:30 crc kubenswrapper[5029]: I0313 21:57:30.776437 5029 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="97961996-b234-441c-ba7c-2c479dfae7f4" containerName="galera" probeResult="failure" output="command timed out" Mar 13 21:57:38 crc kubenswrapper[5029]: I0313 21:57:38.600461 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:57:38 crc kubenswrapper[5029]: E0313 21:57:38.601527 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.184053 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cr9wg"] Mar 13 21:57:44 crc kubenswrapper[5029]: E0313 21:57:44.186776 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6731e82b-37f1-4810-90c4-fa12858b47f1" containerName="oc" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.186830 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="6731e82b-37f1-4810-90c4-fa12858b47f1" containerName="oc" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.187604 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="6731e82b-37f1-4810-90c4-fa12858b47f1" containerName="oc" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.189820 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.202136 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cr9wg"] Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.246292 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-catalog-content\") pod \"certified-operators-cr9wg\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.246578 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-utilities\") pod \"certified-operators-cr9wg\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.246759 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drk84\" (UniqueName: \"kubernetes.io/projected/2016b030-5a23-4245-9cff-658274b6f93c-kube-api-access-drk84\") pod \"certified-operators-cr9wg\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.348800 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-utilities\") pod \"certified-operators-cr9wg\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.348885 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-drk84\" (UniqueName: \"kubernetes.io/projected/2016b030-5a23-4245-9cff-658274b6f93c-kube-api-access-drk84\") pod \"certified-operators-cr9wg\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.348929 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-catalog-content\") pod \"certified-operators-cr9wg\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.349504 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-catalog-content\") pod \"certified-operators-cr9wg\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.349736 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-utilities\") pod \"certified-operators-cr9wg\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.375470 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drk84\" (UniqueName: \"kubernetes.io/projected/2016b030-5a23-4245-9cff-658274b6f93c-kube-api-access-drk84\") pod \"certified-operators-cr9wg\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.480285 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59_3696e6e7-3920-42fc-8846-f47bfe1ff906/util/0.log" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.518435 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.807718 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59_3696e6e7-3920-42fc-8846-f47bfe1ff906/pull/0.log" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.902582 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59_3696e6e7-3920-42fc-8846-f47bfe1ff906/pull/0.log" Mar 13 21:57:44 crc kubenswrapper[5029]: I0313 21:57:44.974513 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59_3696e6e7-3920-42fc-8846-f47bfe1ff906/util/0.log" Mar 13 21:57:45 crc kubenswrapper[5029]: I0313 21:57:45.682529 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59_3696e6e7-3920-42fc-8846-f47bfe1ff906/pull/0.log" Mar 13 21:57:45 crc kubenswrapper[5029]: I0313 21:57:45.750330 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cr9wg"] Mar 13 21:57:45 crc kubenswrapper[5029]: I0313 21:57:45.778629 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59_3696e6e7-3920-42fc-8846-f47bfe1ff906/extract/0.log" Mar 13 21:57:45 crc kubenswrapper[5029]: I0313 21:57:45.779382 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g6g59_3696e6e7-3920-42fc-8846-f47bfe1ff906/util/0.log" Mar 13 21:57:45 crc kubenswrapper[5029]: I0313 21:57:45.915659 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d_53744549-0d0b-409a-a51c-67a6f8df65d5/util/0.log" Mar 13 21:57:45 crc kubenswrapper[5029]: I0313 21:57:45.940255 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr9wg" event={"ID":"2016b030-5a23-4245-9cff-658274b6f93c","Type":"ContainerStarted","Data":"cb6f6a7cec65c17ba2ac1593b059650c7b9393d696d57def9eec53ef82aefd50"} Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.114272 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d_53744549-0d0b-409a-a51c-67a6f8df65d5/pull/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.150654 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d_53744549-0d0b-409a-a51c-67a6f8df65d5/pull/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.177968 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d_53744549-0d0b-409a-a51c-67a6f8df65d5/util/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.337262 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d_53744549-0d0b-409a-a51c-67a6f8df65d5/util/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.383451 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d_53744549-0d0b-409a-a51c-67a6f8df65d5/extract/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.397377 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rfb8d_53744549-0d0b-409a-a51c-67a6f8df65d5/pull/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.534560 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6bpnw_15fd0736-9d55-436e-ac0d-de5e11d0a0b4/extract-utilities/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.733075 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6bpnw_15fd0736-9d55-436e-ac0d-de5e11d0a0b4/extract-content/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.740358 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6bpnw_15fd0736-9d55-436e-ac0d-de5e11d0a0b4/extract-content/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.741124 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6bpnw_15fd0736-9d55-436e-ac0d-de5e11d0a0b4/extract-utilities/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.937582 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6bpnw_15fd0736-9d55-436e-ac0d-de5e11d0a0b4/extract-utilities/0.log" Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.950590 5029 generic.go:334] "Generic (PLEG): container finished" podID="2016b030-5a23-4245-9cff-658274b6f93c" containerID="51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36" exitCode=0 Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.950631 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-cr9wg" event={"ID":"2016b030-5a23-4245-9cff-658274b6f93c","Type":"ContainerDied","Data":"51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36"} Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.953970 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 21:57:46 crc kubenswrapper[5029]: I0313 21:57:46.996220 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6bpnw_15fd0736-9d55-436e-ac0d-de5e11d0a0b4/extract-content/0.log" Mar 13 21:57:47 crc kubenswrapper[5029]: I0313 21:57:47.183278 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rftp4_b4e88689-9871-4cd2-9d9e-23b3487a7957/extract-utilities/0.log" Mar 13 21:57:47 crc kubenswrapper[5029]: I0313 21:57:47.419196 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rftp4_b4e88689-9871-4cd2-9d9e-23b3487a7957/extract-utilities/0.log" Mar 13 21:57:47 crc kubenswrapper[5029]: I0313 21:57:47.529231 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rftp4_b4e88689-9871-4cd2-9d9e-23b3487a7957/extract-content/0.log" Mar 13 21:57:47 crc kubenswrapper[5029]: I0313 21:57:47.570379 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rftp4_b4e88689-9871-4cd2-9d9e-23b3487a7957/extract-content/0.log" Mar 13 21:57:47 crc kubenswrapper[5029]: I0313 21:57:47.775582 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rftp4_b4e88689-9871-4cd2-9d9e-23b3487a7957/extract-utilities/0.log" Mar 13 21:57:47 crc kubenswrapper[5029]: I0313 21:57:47.800688 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6bpnw_15fd0736-9d55-436e-ac0d-de5e11d0a0b4/registry-server/0.log" Mar 13 21:57:47 crc kubenswrapper[5029]: I0313 21:57:47.857784 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rftp4_b4e88689-9871-4cd2-9d9e-23b3487a7957/extract-content/0.log" Mar 13 21:57:47 crc kubenswrapper[5029]: I0313 21:57:47.963582 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr9wg" event={"ID":"2016b030-5a23-4245-9cff-658274b6f93c","Type":"ContainerStarted","Data":"52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95"} Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.096363 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vg7pb_e6a8cc11-fafe-4b5a-a194-61c8680f0585/marketplace-operator/0.log" Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.187832 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rftp4_b4e88689-9871-4cd2-9d9e-23b3487a7957/registry-server/0.log" Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.312904 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xccj6_f796f770-1b16-4b7a-a52a-bdebed36f36b/extract-utilities/0.log" Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.613005 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xccj6_f796f770-1b16-4b7a-a52a-bdebed36f36b/extract-utilities/0.log" Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.654406 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xccj6_f796f770-1b16-4b7a-a52a-bdebed36f36b/extract-content/0.log" Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.654709 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-xccj6_f796f770-1b16-4b7a-a52a-bdebed36f36b/extract-content/0.log" Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.823610 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xccj6_f796f770-1b16-4b7a-a52a-bdebed36f36b/extract-utilities/0.log" Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.840692 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xccj6_f796f770-1b16-4b7a-a52a-bdebed36f36b/extract-content/0.log" Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.918543 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tpjt_ea119203-d4b1-426b-aa6e-4b49cb01f3a7/extract-utilities/0.log" Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.975414 5029 generic.go:334] "Generic (PLEG): container finished" podID="2016b030-5a23-4245-9cff-658274b6f93c" containerID="52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95" exitCode=0 Mar 13 21:57:48 crc kubenswrapper[5029]: I0313 21:57:48.975471 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr9wg" event={"ID":"2016b030-5a23-4245-9cff-658274b6f93c","Type":"ContainerDied","Data":"52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95"} Mar 13 21:57:49 crc kubenswrapper[5029]: I0313 21:57:49.062056 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xccj6_f796f770-1b16-4b7a-a52a-bdebed36f36b/registry-server/0.log" Mar 13 21:57:49 crc kubenswrapper[5029]: I0313 21:57:49.196404 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tpjt_ea119203-d4b1-426b-aa6e-4b49cb01f3a7/extract-content/0.log" Mar 13 21:57:49 crc kubenswrapper[5029]: I0313 21:57:49.205067 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4tpjt_ea119203-d4b1-426b-aa6e-4b49cb01f3a7/extract-utilities/0.log" Mar 13 21:57:49 crc kubenswrapper[5029]: I0313 21:57:49.241548 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tpjt_ea119203-d4b1-426b-aa6e-4b49cb01f3a7/extract-content/0.log" Mar 13 21:57:49 crc kubenswrapper[5029]: I0313 21:57:49.394327 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tpjt_ea119203-d4b1-426b-aa6e-4b49cb01f3a7/extract-utilities/0.log" Mar 13 21:57:49 crc kubenswrapper[5029]: I0313 21:57:49.410080 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tpjt_ea119203-d4b1-426b-aa6e-4b49cb01f3a7/extract-content/0.log" Mar 13 21:57:50 crc kubenswrapper[5029]: I0313 21:57:50.017896 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr9wg" event={"ID":"2016b030-5a23-4245-9cff-658274b6f93c","Type":"ContainerStarted","Data":"5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e"} Mar 13 21:57:50 crc kubenswrapper[5029]: I0313 21:57:50.050441 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cr9wg" podStartSLOduration=3.372056042 podStartE2EDuration="6.050423289s" podCreationTimestamp="2026-03-13 21:57:44 +0000 UTC" firstStartedPulling="2026-03-13 21:57:46.953622747 +0000 UTC m=+5426.969705150" lastFinishedPulling="2026-03-13 21:57:49.631989994 +0000 UTC m=+5429.648072397" observedRunningTime="2026-03-13 21:57:50.043821907 +0000 UTC m=+5430.059904310" watchObservedRunningTime="2026-03-13 21:57:50.050423289 +0000 UTC m=+5430.066505692" Mar 13 21:57:50 crc kubenswrapper[5029]: I0313 21:57:50.092090 5029 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4tpjt_ea119203-d4b1-426b-aa6e-4b49cb01f3a7/registry-server/0.log" Mar 13 21:57:51 crc kubenswrapper[5029]: I0313 21:57:51.600925 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:57:51 crc kubenswrapper[5029]: E0313 21:57:51.601663 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:57:54 crc kubenswrapper[5029]: I0313 21:57:54.518830 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:54 crc kubenswrapper[5029]: I0313 21:57:54.519597 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:54 crc kubenswrapper[5029]: I0313 21:57:54.615158 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:55 crc kubenswrapper[5029]: I0313 21:57:55.150310 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:56 crc kubenswrapper[5029]: I0313 21:57:56.149921 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cr9wg"] Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.094338 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cr9wg" podUID="2016b030-5a23-4245-9cff-658274b6f93c" 
containerName="registry-server" containerID="cri-o://5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e" gracePeriod=2 Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.669123 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.743867 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drk84\" (UniqueName: \"kubernetes.io/projected/2016b030-5a23-4245-9cff-658274b6f93c-kube-api-access-drk84\") pod \"2016b030-5a23-4245-9cff-658274b6f93c\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.744032 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-catalog-content\") pod \"2016b030-5a23-4245-9cff-658274b6f93c\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.744200 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-utilities\") pod \"2016b030-5a23-4245-9cff-658274b6f93c\" (UID: \"2016b030-5a23-4245-9cff-658274b6f93c\") " Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.745276 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-utilities" (OuterVolumeSpecName: "utilities") pod "2016b030-5a23-4245-9cff-658274b6f93c" (UID: "2016b030-5a23-4245-9cff-658274b6f93c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.757311 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2016b030-5a23-4245-9cff-658274b6f93c-kube-api-access-drk84" (OuterVolumeSpecName: "kube-api-access-drk84") pod "2016b030-5a23-4245-9cff-658274b6f93c" (UID: "2016b030-5a23-4245-9cff-658274b6f93c"). InnerVolumeSpecName "kube-api-access-drk84". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.803842 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2016b030-5a23-4245-9cff-658274b6f93c" (UID: "2016b030-5a23-4245-9cff-658274b6f93c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.847365 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drk84\" (UniqueName: \"kubernetes.io/projected/2016b030-5a23-4245-9cff-658274b6f93c-kube-api-access-drk84\") on node \"crc\" DevicePath \"\"" Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.847403 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 21:57:57 crc kubenswrapper[5029]: I0313 21:57:57.847413 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2016b030-5a23-4245-9cff-658274b6f93c-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.109428 5029 generic.go:334] "Generic (PLEG): container finished" podID="2016b030-5a23-4245-9cff-658274b6f93c" 
containerID="5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e" exitCode=0 Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.109512 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cr9wg" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.109507 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr9wg" event={"ID":"2016b030-5a23-4245-9cff-658274b6f93c","Type":"ContainerDied","Data":"5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e"} Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.109590 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr9wg" event={"ID":"2016b030-5a23-4245-9cff-658274b6f93c","Type":"ContainerDied","Data":"cb6f6a7cec65c17ba2ac1593b059650c7b9393d696d57def9eec53ef82aefd50"} Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.109648 5029 scope.go:117] "RemoveContainer" containerID="5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.156139 5029 scope.go:117] "RemoveContainer" containerID="52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.166656 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cr9wg"] Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.183136 5029 scope.go:117] "RemoveContainer" containerID="51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.186269 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cr9wg"] Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.224374 5029 scope.go:117] "RemoveContainer" containerID="5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e" Mar 13 
21:57:58 crc kubenswrapper[5029]: E0313 21:57:58.225038 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e\": container with ID starting with 5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e not found: ID does not exist" containerID="5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.225101 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e"} err="failed to get container status \"5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e\": rpc error: code = NotFound desc = could not find container \"5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e\": container with ID starting with 5722562f10c5c34e4eac66ea04111570a8b89e628074b41e3ebf0cede427c89e not found: ID does not exist" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.225139 5029 scope.go:117] "RemoveContainer" containerID="52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95" Mar 13 21:57:58 crc kubenswrapper[5029]: E0313 21:57:58.226992 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95\": container with ID starting with 52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95 not found: ID does not exist" containerID="52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.227115 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95"} err="failed to get container status 
\"52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95\": rpc error: code = NotFound desc = could not find container \"52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95\": container with ID starting with 52908f3b267dc473222aef4208df225f01e6f2a8a72904c86ff9f59131469d95 not found: ID does not exist" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.227157 5029 scope.go:117] "RemoveContainer" containerID="51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36" Mar 13 21:57:58 crc kubenswrapper[5029]: E0313 21:57:58.227706 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36\": container with ID starting with 51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36 not found: ID does not exist" containerID="51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.227771 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36"} err="failed to get container status \"51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36\": rpc error: code = NotFound desc = could not find container \"51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36\": container with ID starting with 51bc1a192fc3669604bcdf60480382f568d41671fe69517b97d1434ace003c36 not found: ID does not exist" Mar 13 21:57:58 crc kubenswrapper[5029]: I0313 21:57:58.618130 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2016b030-5a23-4245-9cff-658274b6f93c" path="/var/lib/kubelet/pods/2016b030-5a23-4245-9cff-658274b6f93c/volumes" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.150587 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557318-ddw78"] Mar 13 21:58:00 
crc kubenswrapper[5029]: E0313 21:58:00.151336 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2016b030-5a23-4245-9cff-658274b6f93c" containerName="extract-content" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.151352 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="2016b030-5a23-4245-9cff-658274b6f93c" containerName="extract-content" Mar 13 21:58:00 crc kubenswrapper[5029]: E0313 21:58:00.151363 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2016b030-5a23-4245-9cff-658274b6f93c" containerName="extract-utilities" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.151369 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="2016b030-5a23-4245-9cff-658274b6f93c" containerName="extract-utilities" Mar 13 21:58:00 crc kubenswrapper[5029]: E0313 21:58:00.151377 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2016b030-5a23-4245-9cff-658274b6f93c" containerName="registry-server" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.151384 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="2016b030-5a23-4245-9cff-658274b6f93c" containerName="registry-server" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.151586 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="2016b030-5a23-4245-9cff-658274b6f93c" containerName="registry-server" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.152240 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557318-ddw78" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.155326 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.155463 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.155595 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.165009 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557318-ddw78"] Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.207049 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsqb\" (UniqueName: \"kubernetes.io/projected/546d8392-d924-412c-90b3-9a0aa9ad9ff1-kube-api-access-qzsqb\") pod \"auto-csr-approver-29557318-ddw78\" (UID: \"546d8392-d924-412c-90b3-9a0aa9ad9ff1\") " pod="openshift-infra/auto-csr-approver-29557318-ddw78" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.309645 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzsqb\" (UniqueName: \"kubernetes.io/projected/546d8392-d924-412c-90b3-9a0aa9ad9ff1-kube-api-access-qzsqb\") pod \"auto-csr-approver-29557318-ddw78\" (UID: \"546d8392-d924-412c-90b3-9a0aa9ad9ff1\") " pod="openshift-infra/auto-csr-approver-29557318-ddw78" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.335379 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzsqb\" (UniqueName: \"kubernetes.io/projected/546d8392-d924-412c-90b3-9a0aa9ad9ff1-kube-api-access-qzsqb\") pod \"auto-csr-approver-29557318-ddw78\" (UID: \"546d8392-d924-412c-90b3-9a0aa9ad9ff1\") " 
pod="openshift-infra/auto-csr-approver-29557318-ddw78" Mar 13 21:58:00 crc kubenswrapper[5029]: I0313 21:58:00.475649 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557318-ddw78" Mar 13 21:58:01 crc kubenswrapper[5029]: I0313 21:58:01.015270 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557318-ddw78"] Mar 13 21:58:01 crc kubenswrapper[5029]: I0313 21:58:01.145789 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557318-ddw78" event={"ID":"546d8392-d924-412c-90b3-9a0aa9ad9ff1","Type":"ContainerStarted","Data":"491b66643250a5ea4ba298ca21ba3f1d7f4082d6abcf540cc89296566b6b1b79"} Mar 13 21:58:03 crc kubenswrapper[5029]: I0313 21:58:03.179533 5029 generic.go:334] "Generic (PLEG): container finished" podID="546d8392-d924-412c-90b3-9a0aa9ad9ff1" containerID="3d6a1ab7af89b202ad2c85a4dfdf49709cd8cbbf0bf87f44ba250b791381a958" exitCode=0 Mar 13 21:58:03 crc kubenswrapper[5029]: I0313 21:58:03.179615 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557318-ddw78" event={"ID":"546d8392-d924-412c-90b3-9a0aa9ad9ff1","Type":"ContainerDied","Data":"3d6a1ab7af89b202ad2c85a4dfdf49709cd8cbbf0bf87f44ba250b791381a958"} Mar 13 21:58:04 crc kubenswrapper[5029]: I0313 21:58:04.574936 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557318-ddw78" Mar 13 21:58:04 crc kubenswrapper[5029]: I0313 21:58:04.599816 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:58:04 crc kubenswrapper[5029]: E0313 21:58:04.600319 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:58:04 crc kubenswrapper[5029]: I0313 21:58:04.715845 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzsqb\" (UniqueName: \"kubernetes.io/projected/546d8392-d924-412c-90b3-9a0aa9ad9ff1-kube-api-access-qzsqb\") pod \"546d8392-d924-412c-90b3-9a0aa9ad9ff1\" (UID: \"546d8392-d924-412c-90b3-9a0aa9ad9ff1\") " Mar 13 21:58:04 crc kubenswrapper[5029]: I0313 21:58:04.727271 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/546d8392-d924-412c-90b3-9a0aa9ad9ff1-kube-api-access-qzsqb" (OuterVolumeSpecName: "kube-api-access-qzsqb") pod "546d8392-d924-412c-90b3-9a0aa9ad9ff1" (UID: "546d8392-d924-412c-90b3-9a0aa9ad9ff1"). InnerVolumeSpecName "kube-api-access-qzsqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 21:58:04 crc kubenswrapper[5029]: I0313 21:58:04.819475 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzsqb\" (UniqueName: \"kubernetes.io/projected/546d8392-d924-412c-90b3-9a0aa9ad9ff1-kube-api-access-qzsqb\") on node \"crc\" DevicePath \"\"" Mar 13 21:58:05 crc kubenswrapper[5029]: I0313 21:58:05.202079 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557318-ddw78" event={"ID":"546d8392-d924-412c-90b3-9a0aa9ad9ff1","Type":"ContainerDied","Data":"491b66643250a5ea4ba298ca21ba3f1d7f4082d6abcf540cc89296566b6b1b79"} Mar 13 21:58:05 crc kubenswrapper[5029]: I0313 21:58:05.202569 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="491b66643250a5ea4ba298ca21ba3f1d7f4082d6abcf540cc89296566b6b1b79" Mar 13 21:58:05 crc kubenswrapper[5029]: I0313 21:58:05.202152 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557318-ddw78" Mar 13 21:58:05 crc kubenswrapper[5029]: I0313 21:58:05.664637 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557312-4v7kt"] Mar 13 21:58:05 crc kubenswrapper[5029]: I0313 21:58:05.675875 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557312-4v7kt"] Mar 13 21:58:06 crc kubenswrapper[5029]: I0313 21:58:06.614769 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a27b5f-7a3b-4064-b605-26cb7b044d52" path="/var/lib/kubelet/pods/15a27b5f-7a3b-4064-b605-26cb7b044d52/volumes" Mar 13 21:58:16 crc kubenswrapper[5029]: I0313 21:58:16.526626 5029 scope.go:117] "RemoveContainer" containerID="e4acac8f446d947d5b47c53e0fbbd23ebfd801328031b90f9ef2ef8149743e62" Mar 13 21:58:17 crc kubenswrapper[5029]: I0313 21:58:17.599800 5029 scope.go:117] "RemoveContainer" 
containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:58:17 crc kubenswrapper[5029]: E0313 21:58:17.600354 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:58:21 crc kubenswrapper[5029]: E0313 21:58:21.903740 5029 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.181:56554->38.102.83.181:36147: write tcp 38.102.83.181:56554->38.102.83.181:36147: write: broken pipe Mar 13 21:58:31 crc kubenswrapper[5029]: I0313 21:58:31.599369 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:58:31 crc kubenswrapper[5029]: E0313 21:58:31.601412 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:58:32 crc kubenswrapper[5029]: E0313 21:58:32.149830 5029 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.181:51812->38.102.83.181:36147: read tcp 38.102.83.181:51812->38.102.83.181:36147: read: connection reset by peer Mar 13 21:58:42 crc kubenswrapper[5029]: I0313 21:58:42.599365 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:58:42 crc kubenswrapper[5029]: E0313 
21:58:42.600369 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:58:55 crc kubenswrapper[5029]: I0313 21:58:55.599885 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:58:55 crc kubenswrapper[5029]: E0313 21:58:55.601120 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:59:10 crc kubenswrapper[5029]: I0313 21:59:10.609101 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:59:10 crc kubenswrapper[5029]: E0313 21:59:10.610043 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:59:23 crc kubenswrapper[5029]: I0313 21:59:23.602022 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:59:23 crc 
kubenswrapper[5029]: E0313 21:59:23.603765 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:59:35 crc kubenswrapper[5029]: I0313 21:59:35.599676 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:59:35 crc kubenswrapper[5029]: E0313 21:59:35.601081 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 21:59:49 crc kubenswrapper[5029]: I0313 21:59:49.600302 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 21:59:49 crc kubenswrapper[5029]: E0313 21:59:49.601401 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.164972 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557320-kjxrc"] Mar 13 
22:00:00 crc kubenswrapper[5029]: E0313 22:00:00.166359 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="546d8392-d924-412c-90b3-9a0aa9ad9ff1" containerName="oc" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.166382 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="546d8392-d924-412c-90b3-9a0aa9ad9ff1" containerName="oc" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.166680 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="546d8392-d924-412c-90b3-9a0aa9ad9ff1" containerName="oc" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.167729 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557320-kjxrc" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.171424 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.171816 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.172172 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.183504 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7"] Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.185337 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.187951 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.188211 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.193665 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557320-kjxrc"] Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.204120 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7"] Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.355145 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdpg\" (UniqueName: \"kubernetes.io/projected/8ebbbcdf-2f5c-46f1-961d-5c05678b8fec-kube-api-access-prdpg\") pod \"auto-csr-approver-29557320-kjxrc\" (UID: \"8ebbbcdf-2f5c-46f1-961d-5c05678b8fec\") " pod="openshift-infra/auto-csr-approver-29557320-kjxrc" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.355248 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-config-volume\") pod \"collect-profiles-29557320-6k2k7\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.355309 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj527\" (UniqueName: 
\"kubernetes.io/projected/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-kube-api-access-kj527\") pod \"collect-profiles-29557320-6k2k7\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.355331 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-secret-volume\") pod \"collect-profiles-29557320-6k2k7\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.458825 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdpg\" (UniqueName: \"kubernetes.io/projected/8ebbbcdf-2f5c-46f1-961d-5c05678b8fec-kube-api-access-prdpg\") pod \"auto-csr-approver-29557320-kjxrc\" (UID: \"8ebbbcdf-2f5c-46f1-961d-5c05678b8fec\") " pod="openshift-infra/auto-csr-approver-29557320-kjxrc" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.458992 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-config-volume\") pod \"collect-profiles-29557320-6k2k7\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.459079 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj527\" (UniqueName: \"kubernetes.io/projected/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-kube-api-access-kj527\") pod \"collect-profiles-29557320-6k2k7\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc 
kubenswrapper[5029]: I0313 22:00:00.459133 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-secret-volume\") pod \"collect-profiles-29557320-6k2k7\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.462337 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-config-volume\") pod \"collect-profiles-29557320-6k2k7\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.478623 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-secret-volume\") pod \"collect-profiles-29557320-6k2k7\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.490373 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj527\" (UniqueName: \"kubernetes.io/projected/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-kube-api-access-kj527\") pod \"collect-profiles-29557320-6k2k7\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:00 crc kubenswrapper[5029]: I0313 22:00:00.537610 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:01 crc kubenswrapper[5029]: I0313 22:00:01.278638 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prdpg\" (UniqueName: \"kubernetes.io/projected/8ebbbcdf-2f5c-46f1-961d-5c05678b8fec-kube-api-access-prdpg\") pod \"auto-csr-approver-29557320-kjxrc\" (UID: \"8ebbbcdf-2f5c-46f1-961d-5c05678b8fec\") " pod="openshift-infra/auto-csr-approver-29557320-kjxrc" Mar 13 22:00:01 crc kubenswrapper[5029]: I0313 22:00:01.425393 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557320-kjxrc" Mar 13 22:00:01 crc kubenswrapper[5029]: I0313 22:00:01.870421 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7"] Mar 13 22:00:01 crc kubenswrapper[5029]: I0313 22:00:01.946093 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557320-kjxrc"] Mar 13 22:00:01 crc kubenswrapper[5029]: W0313 22:00:01.949152 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ebbbcdf_2f5c_46f1_961d_5c05678b8fec.slice/crio-bda941874b38e36aeb669ae53e02b37f3b41869f977344541ee6de2a6b604773 WatchSource:0}: Error finding container bda941874b38e36aeb669ae53e02b37f3b41869f977344541ee6de2a6b604773: Status 404 returned error can't find the container with id bda941874b38e36aeb669ae53e02b37f3b41869f977344541ee6de2a6b604773 Mar 13 22:00:02 crc kubenswrapper[5029]: I0313 22:00:02.016639 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557320-kjxrc" event={"ID":"8ebbbcdf-2f5c-46f1-961d-5c05678b8fec","Type":"ContainerStarted","Data":"bda941874b38e36aeb669ae53e02b37f3b41869f977344541ee6de2a6b604773"} Mar 13 22:00:02 crc kubenswrapper[5029]: I0313 22:00:02.018228 
5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" event={"ID":"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4","Type":"ContainerStarted","Data":"a472c71065f81e0a6bfac7deea7b7f957030ead91da0bbb8f82b571e27feb4ca"} Mar 13 22:00:02 crc kubenswrapper[5029]: I0313 22:00:02.600901 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:00:02 crc kubenswrapper[5029]: E0313 22:00:02.601505 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:00:03 crc kubenswrapper[5029]: I0313 22:00:03.035038 5029 generic.go:334] "Generic (PLEG): container finished" podID="4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4" containerID="68e8891dedf04a0161d05675713b005d1c46054ebc5edafe9ea64b3a83df3e5c" exitCode=0 Mar 13 22:00:03 crc kubenswrapper[5029]: I0313 22:00:03.035151 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" event={"ID":"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4","Type":"ContainerDied","Data":"68e8891dedf04a0161d05675713b005d1c46054ebc5edafe9ea64b3a83df3e5c"} Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.459409 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.565574 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj527\" (UniqueName: \"kubernetes.io/projected/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-kube-api-access-kj527\") pod \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.565716 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-secret-volume\") pod \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.565757 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-config-volume\") pod \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\" (UID: \"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4\") " Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.569008 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-config-volume" (OuterVolumeSpecName: "config-volume") pod "4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4" (UID: "4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.573904 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-kube-api-access-kj527" (OuterVolumeSpecName: "kube-api-access-kj527") pod "4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4" (UID: "4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4"). 
InnerVolumeSpecName "kube-api-access-kj527". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.579426 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4" (UID: "4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.691595 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj527\" (UniqueName: \"kubernetes.io/projected/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-kube-api-access-kj527\") on node \"crc\" DevicePath \"\"" Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.691634 5029 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 13 22:00:04 crc kubenswrapper[5029]: I0313 22:00:04.691645 5029 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4-config-volume\") on node \"crc\" DevicePath \"\"" Mar 13 22:00:04 crc kubenswrapper[5029]: E0313 22:00:04.848886 5029 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad9aab8_6731_4172_b0ea_e4a2fcfb1fc4.slice/crio-a472c71065f81e0a6bfac7deea7b7f957030ead91da0bbb8f82b571e27feb4ca\": RecentStats: unable to find data in memory cache]" Mar 13 22:00:05 crc kubenswrapper[5029]: I0313 22:00:05.063251 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" 
event={"ID":"4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4","Type":"ContainerDied","Data":"a472c71065f81e0a6bfac7deea7b7f957030ead91da0bbb8f82b571e27feb4ca"} Mar 13 22:00:05 crc kubenswrapper[5029]: I0313 22:00:05.063591 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a472c71065f81e0a6bfac7deea7b7f957030ead91da0bbb8f82b571e27feb4ca" Mar 13 22:00:05 crc kubenswrapper[5029]: I0313 22:00:05.063813 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557320-6k2k7" Mar 13 22:00:05 crc kubenswrapper[5029]: I0313 22:00:05.568414 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"] Mar 13 22:00:05 crc kubenswrapper[5029]: I0313 22:00:05.583131 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557275-tvxdt"] Mar 13 22:00:06 crc kubenswrapper[5029]: I0313 22:00:06.080021 5029 generic.go:334] "Generic (PLEG): container finished" podID="8ebbbcdf-2f5c-46f1-961d-5c05678b8fec" containerID="ade32f653dd781c631df695a399e8e9665f6b8ed3d9ee447eb5c07a10a504a62" exitCode=0 Mar 13 22:00:06 crc kubenswrapper[5029]: I0313 22:00:06.080073 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557320-kjxrc" event={"ID":"8ebbbcdf-2f5c-46f1-961d-5c05678b8fec","Type":"ContainerDied","Data":"ade32f653dd781c631df695a399e8e9665f6b8ed3d9ee447eb5c07a10a504a62"} Mar 13 22:00:06 crc kubenswrapper[5029]: I0313 22:00:06.618401 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d53327-dc50-4621-95a2-1b17821475f5" path="/var/lib/kubelet/pods/63d53327-dc50-4621-95a2-1b17821475f5/volumes" Mar 13 22:00:07 crc kubenswrapper[5029]: I0313 22:00:07.490797 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557320-kjxrc" Mar 13 22:00:07 crc kubenswrapper[5029]: I0313 22:00:07.680940 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prdpg\" (UniqueName: \"kubernetes.io/projected/8ebbbcdf-2f5c-46f1-961d-5c05678b8fec-kube-api-access-prdpg\") pod \"8ebbbcdf-2f5c-46f1-961d-5c05678b8fec\" (UID: \"8ebbbcdf-2f5c-46f1-961d-5c05678b8fec\") " Mar 13 22:00:07 crc kubenswrapper[5029]: I0313 22:00:07.699934 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebbbcdf-2f5c-46f1-961d-5c05678b8fec-kube-api-access-prdpg" (OuterVolumeSpecName: "kube-api-access-prdpg") pod "8ebbbcdf-2f5c-46f1-961d-5c05678b8fec" (UID: "8ebbbcdf-2f5c-46f1-961d-5c05678b8fec"). InnerVolumeSpecName "kube-api-access-prdpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 22:00:07 crc kubenswrapper[5029]: I0313 22:00:07.786272 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prdpg\" (UniqueName: \"kubernetes.io/projected/8ebbbcdf-2f5c-46f1-961d-5c05678b8fec-kube-api-access-prdpg\") on node \"crc\" DevicePath \"\"" Mar 13 22:00:08 crc kubenswrapper[5029]: I0313 22:00:08.105838 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557320-kjxrc" event={"ID":"8ebbbcdf-2f5c-46f1-961d-5c05678b8fec","Type":"ContainerDied","Data":"bda941874b38e36aeb669ae53e02b37f3b41869f977344541ee6de2a6b604773"} Mar 13 22:00:08 crc kubenswrapper[5029]: I0313 22:00:08.106696 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda941874b38e36aeb669ae53e02b37f3b41869f977344541ee6de2a6b604773" Mar 13 22:00:08 crc kubenswrapper[5029]: I0313 22:00:08.105930 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557320-kjxrc" Mar 13 22:00:08 crc kubenswrapper[5029]: I0313 22:00:08.587881 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557314-jp4cs"] Mar 13 22:00:08 crc kubenswrapper[5029]: I0313 22:00:08.628076 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557314-jp4cs"] Mar 13 22:00:10 crc kubenswrapper[5029]: I0313 22:00:10.620590 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3" path="/var/lib/kubelet/pods/020a8cb4-f8f7-4ec8-a0d5-0a9d850bf3f3/volumes" Mar 13 22:00:14 crc kubenswrapper[5029]: I0313 22:00:14.600973 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:00:14 crc kubenswrapper[5029]: E0313 22:00:14.601945 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:00:16 crc kubenswrapper[5029]: I0313 22:00:16.794615 5029 scope.go:117] "RemoveContainer" containerID="9ec70c8553e2308f8be35ca67ac8a7099011acaf599c45b63563d7f7cd7b2537" Mar 13 22:00:17 crc kubenswrapper[5029]: I0313 22:00:17.491349 5029 scope.go:117] "RemoveContainer" containerID="0825a54fceb1381a2822ccf6bcbc2466e86f3d832fa59d7877694185da8e1da5" Mar 13 22:00:17 crc kubenswrapper[5029]: I0313 22:00:17.562090 5029 scope.go:117] "RemoveContainer" containerID="90c13895a650bb86923f1ba10feb29fec99a443f3f85b65a8c5768758ad4d216" Mar 13 22:00:25 crc kubenswrapper[5029]: I0313 22:00:25.599603 5029 scope.go:117] "RemoveContainer" 
containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:00:25 crc kubenswrapper[5029]: E0313 22:00:25.600352 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:00:29 crc kubenswrapper[5029]: I0313 22:00:29.367468 5029 generic.go:334] "Generic (PLEG): container finished" podID="69272fdf-af43-4ca2-8597-3f4d2fc412da" containerID="ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df" exitCode=0 Mar 13 22:00:29 crc kubenswrapper[5029]: I0313 22:00:29.367567 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9kdhx/must-gather-v6g8n" event={"ID":"69272fdf-af43-4ca2-8597-3f4d2fc412da","Type":"ContainerDied","Data":"ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df"} Mar 13 22:00:29 crc kubenswrapper[5029]: I0313 22:00:29.369425 5029 scope.go:117] "RemoveContainer" containerID="ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df" Mar 13 22:00:29 crc kubenswrapper[5029]: I0313 22:00:29.761043 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9kdhx_must-gather-v6g8n_69272fdf-af43-4ca2-8597-3f4d2fc412da/gather/0.log" Mar 13 22:00:37 crc kubenswrapper[5029]: I0313 22:00:37.600073 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:00:37 crc kubenswrapper[5029]: E0313 22:00:37.601387 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:00:38 crc kubenswrapper[5029]: I0313 22:00:38.955898 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9kdhx/must-gather-v6g8n"] Mar 13 22:00:38 crc kubenswrapper[5029]: I0313 22:00:38.956891 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9kdhx/must-gather-v6g8n" podUID="69272fdf-af43-4ca2-8597-3f4d2fc412da" containerName="copy" containerID="cri-o://a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f" gracePeriod=2 Mar 13 22:00:38 crc kubenswrapper[5029]: I0313 22:00:38.982460 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9kdhx/must-gather-v6g8n"] Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.467868 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9kdhx_must-gather-v6g8n_69272fdf-af43-4ca2-8597-3f4d2fc412da/copy/0.log" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.480187 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9kdhx/must-gather-v6g8n" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.498183 5029 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9kdhx_must-gather-v6g8n_69272fdf-af43-4ca2-8597-3f4d2fc412da/copy/0.log" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.499136 5029 generic.go:334] "Generic (PLEG): container finished" podID="69272fdf-af43-4ca2-8597-3f4d2fc412da" containerID="a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f" exitCode=143 Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.499233 5029 scope.go:117] "RemoveContainer" containerID="a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.553034 5029 scope.go:117] "RemoveContainer" containerID="ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.616108 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/69272fdf-af43-4ca2-8597-3f4d2fc412da-kube-api-access-z76bt\") pod \"69272fdf-af43-4ca2-8597-3f4d2fc412da\" (UID: \"69272fdf-af43-4ca2-8597-3f4d2fc412da\") " Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.616316 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69272fdf-af43-4ca2-8597-3f4d2fc412da-must-gather-output\") pod \"69272fdf-af43-4ca2-8597-3f4d2fc412da\" (UID: \"69272fdf-af43-4ca2-8597-3f4d2fc412da\") " Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.623551 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69272fdf-af43-4ca2-8597-3f4d2fc412da-kube-api-access-z76bt" (OuterVolumeSpecName: "kube-api-access-z76bt") pod "69272fdf-af43-4ca2-8597-3f4d2fc412da" (UID: 
"69272fdf-af43-4ca2-8597-3f4d2fc412da"). InnerVolumeSpecName "kube-api-access-z76bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.665001 5029 scope.go:117] "RemoveContainer" containerID="a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f" Mar 13 22:00:39 crc kubenswrapper[5029]: E0313 22:00:39.686316 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f\": container with ID starting with a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f not found: ID does not exist" containerID="a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.686396 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f"} err="failed to get container status \"a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f\": rpc error: code = NotFound desc = could not find container \"a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f\": container with ID starting with a7ab25d7acc7d2e275195c721ee44f31d476b4a266457bdde60cf29f60caa73f not found: ID does not exist" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.686431 5029 scope.go:117] "RemoveContainer" containerID="ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df" Mar 13 22:00:39 crc kubenswrapper[5029]: E0313 22:00:39.687905 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df\": container with ID starting with ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df not found: ID does not exist" 
containerID="ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.687998 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df"} err="failed to get container status \"ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df\": rpc error: code = NotFound desc = could not find container \"ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df\": container with ID starting with ac7fe341e78e050fb7f02fd0601173f64f3d3384e8eb9a40c760bb14c1e255df not found: ID does not exist" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.718669 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/69272fdf-af43-4ca2-8597-3f4d2fc412da-kube-api-access-z76bt\") on node \"crc\" DevicePath \"\"" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.851026 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69272fdf-af43-4ca2-8597-3f4d2fc412da-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "69272fdf-af43-4ca2-8597-3f4d2fc412da" (UID: "69272fdf-af43-4ca2-8597-3f4d2fc412da"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 22:00:39 crc kubenswrapper[5029]: I0313 22:00:39.924463 5029 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69272fdf-af43-4ca2-8597-3f4d2fc412da-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 13 22:00:40 crc kubenswrapper[5029]: I0313 22:00:40.558148 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9kdhx/must-gather-v6g8n" Mar 13 22:00:40 crc kubenswrapper[5029]: I0313 22:00:40.638044 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69272fdf-af43-4ca2-8597-3f4d2fc412da" path="/var/lib/kubelet/pods/69272fdf-af43-4ca2-8597-3f4d2fc412da/volumes" Mar 13 22:00:51 crc kubenswrapper[5029]: I0313 22:00:51.600144 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:00:51 crc kubenswrapper[5029]: E0313 22:00:51.601171 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.168233 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29557321-dprrs"] Mar 13 22:01:00 crc kubenswrapper[5029]: E0313 22:01:00.170157 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69272fdf-af43-4ca2-8597-3f4d2fc412da" containerName="gather" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.170194 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="69272fdf-af43-4ca2-8597-3f4d2fc412da" containerName="gather" Mar 13 22:01:00 crc kubenswrapper[5029]: E0313 22:01:00.170222 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4" containerName="collect-profiles" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.170233 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4" containerName="collect-profiles" Mar 13 22:01:00 crc kubenswrapper[5029]: E0313 
22:01:00.170301 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebbbcdf-2f5c-46f1-961d-5c05678b8fec" containerName="oc" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.170313 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebbbcdf-2f5c-46f1-961d-5c05678b8fec" containerName="oc" Mar 13 22:01:00 crc kubenswrapper[5029]: E0313 22:01:00.170344 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69272fdf-af43-4ca2-8597-3f4d2fc412da" containerName="copy" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.170355 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="69272fdf-af43-4ca2-8597-3f4d2fc412da" containerName="copy" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.170783 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebbbcdf-2f5c-46f1-961d-5c05678b8fec" containerName="oc" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.170814 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="69272fdf-af43-4ca2-8597-3f4d2fc412da" containerName="gather" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.170937 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad9aab8-6731-4172-b0ea-e4a2fcfb1fc4" containerName="collect-profiles" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.170969 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="69272fdf-af43-4ca2-8597-3f4d2fc412da" containerName="copy" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.172638 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.184811 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29557321-dprrs"] Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.282864 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxmwd\" (UniqueName: \"kubernetes.io/projected/c61ecb7d-bea1-458c-aa0e-e574057d52ca-kube-api-access-mxmwd\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.283039 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-combined-ca-bundle\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.283097 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-fernet-keys\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.283212 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-config-data\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.385616 5029 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-config-data\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.386026 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxmwd\" (UniqueName: \"kubernetes.io/projected/c61ecb7d-bea1-458c-aa0e-e574057d52ca-kube-api-access-mxmwd\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.386445 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-combined-ca-bundle\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.386584 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-fernet-keys\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.394357 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-fernet-keys\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.394937 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-config-data\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.395971 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-combined-ca-bundle\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.425117 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxmwd\" (UniqueName: \"kubernetes.io/projected/c61ecb7d-bea1-458c-aa0e-e574057d52ca-kube-api-access-mxmwd\") pod \"keystone-cron-29557321-dprrs\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:00 crc kubenswrapper[5029]: I0313 22:01:00.498214 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:01 crc kubenswrapper[5029]: I0313 22:01:01.043882 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29557321-dprrs"] Mar 13 22:01:01 crc kubenswrapper[5029]: I0313 22:01:01.795946 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557321-dprrs" event={"ID":"c61ecb7d-bea1-458c-aa0e-e574057d52ca","Type":"ContainerStarted","Data":"5381c9342053600e16e8cac67623e5fff7ef13664e0fb67616a4c2b083d05696"} Mar 13 22:01:01 crc kubenswrapper[5029]: I0313 22:01:01.797091 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557321-dprrs" event={"ID":"c61ecb7d-bea1-458c-aa0e-e574057d52ca","Type":"ContainerStarted","Data":"64c8403693dced7505d23138ddbb624f664be6229fec466de7596d7d332c4b50"} Mar 13 22:01:01 crc kubenswrapper[5029]: I0313 22:01:01.818242 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29557321-dprrs" podStartSLOduration=1.8182234259999999 podStartE2EDuration="1.818223426s" podCreationTimestamp="2026-03-13 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 22:01:01.818071752 +0000 UTC m=+5621.834154175" watchObservedRunningTime="2026-03-13 22:01:01.818223426 +0000 UTC m=+5621.834305829" Mar 13 22:01:03 crc kubenswrapper[5029]: I0313 22:01:03.599743 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:01:03 crc kubenswrapper[5029]: E0313 22:01:03.600435 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:01:04 crc kubenswrapper[5029]: I0313 22:01:04.822451 5029 generic.go:334] "Generic (PLEG): container finished" podID="c61ecb7d-bea1-458c-aa0e-e574057d52ca" containerID="5381c9342053600e16e8cac67623e5fff7ef13664e0fb67616a4c2b083d05696" exitCode=0 Mar 13 22:01:04 crc kubenswrapper[5029]: I0313 22:01:04.822536 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557321-dprrs" event={"ID":"c61ecb7d-bea1-458c-aa0e-e574057d52ca","Type":"ContainerDied","Data":"5381c9342053600e16e8cac67623e5fff7ef13664e0fb67616a4c2b083d05696"} Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.262942 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.323705 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-combined-ca-bundle\") pod \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.323869 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxmwd\" (UniqueName: \"kubernetes.io/projected/c61ecb7d-bea1-458c-aa0e-e574057d52ca-kube-api-access-mxmwd\") pod \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.324363 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-config-data\") pod \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 
22:01:06.324794 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-fernet-keys\") pod \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\" (UID: \"c61ecb7d-bea1-458c-aa0e-e574057d52ca\") " Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.330718 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c61ecb7d-bea1-458c-aa0e-e574057d52ca" (UID: "c61ecb7d-bea1-458c-aa0e-e574057d52ca"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.332945 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c61ecb7d-bea1-458c-aa0e-e574057d52ca-kube-api-access-mxmwd" (OuterVolumeSpecName: "kube-api-access-mxmwd") pod "c61ecb7d-bea1-458c-aa0e-e574057d52ca" (UID: "c61ecb7d-bea1-458c-aa0e-e574057d52ca"). InnerVolumeSpecName "kube-api-access-mxmwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.361029 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c61ecb7d-bea1-458c-aa0e-e574057d52ca" (UID: "c61ecb7d-bea1-458c-aa0e-e574057d52ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.400277 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-config-data" (OuterVolumeSpecName: "config-data") pod "c61ecb7d-bea1-458c-aa0e-e574057d52ca" (UID: "c61ecb7d-bea1-458c-aa0e-e574057d52ca"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.427636 5029 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-config-data\") on node \"crc\" DevicePath \"\"" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.427973 5029 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.428038 5029 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61ecb7d-bea1-458c-aa0e-e574057d52ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.428099 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxmwd\" (UniqueName: \"kubernetes.io/projected/c61ecb7d-bea1-458c-aa0e-e574057d52ca-kube-api-access-mxmwd\") on node \"crc\" DevicePath \"\"" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.846791 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557321-dprrs" event={"ID":"c61ecb7d-bea1-458c-aa0e-e574057d52ca","Type":"ContainerDied","Data":"64c8403693dced7505d23138ddbb624f664be6229fec466de7596d7d332c4b50"} Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.847261 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c8403693dced7505d23138ddbb624f664be6229fec466de7596d7d332c4b50" Mar 13 22:01:06 crc kubenswrapper[5029]: I0313 22:01:06.846844 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29557321-dprrs" Mar 13 22:01:14 crc kubenswrapper[5029]: I0313 22:01:14.600041 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:01:14 crc kubenswrapper[5029]: E0313 22:01:14.600938 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:01:17 crc kubenswrapper[5029]: I0313 22:01:17.751386 5029 scope.go:117] "RemoveContainer" containerID="7bc2f7e8a7a22e6e01c632936da08d91539c295cef68d9214e5d92bc8ceacee2" Mar 13 22:01:27 crc kubenswrapper[5029]: I0313 22:01:27.599792 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:01:27 crc kubenswrapper[5029]: E0313 22:01:27.600795 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:01:32 crc kubenswrapper[5029]: I0313 22:01:32.850293 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-59ls5"] Mar 13 22:01:32 crc kubenswrapper[5029]: E0313 22:01:32.852294 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61ecb7d-bea1-458c-aa0e-e574057d52ca" containerName="keystone-cron" Mar 13 22:01:32 crc 
kubenswrapper[5029]: I0313 22:01:32.852313 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61ecb7d-bea1-458c-aa0e-e574057d52ca" containerName="keystone-cron" Mar 13 22:01:32 crc kubenswrapper[5029]: I0313 22:01:32.852532 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61ecb7d-bea1-458c-aa0e-e574057d52ca" containerName="keystone-cron" Mar 13 22:01:32 crc kubenswrapper[5029]: I0313 22:01:32.859529 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:32 crc kubenswrapper[5029]: I0313 22:01:32.890493 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-59ls5"] Mar 13 22:01:32 crc kubenswrapper[5029]: I0313 22:01:32.932653 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-catalog-content\") pod \"redhat-marketplace-59ls5\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:32 crc kubenswrapper[5029]: I0313 22:01:32.933048 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-utilities\") pod \"redhat-marketplace-59ls5\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:32 crc kubenswrapper[5029]: I0313 22:01:32.936569 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsdql\" (UniqueName: \"kubernetes.io/projected/bf3ab57a-d640-4189-b8ab-f957bfd56c31-kube-api-access-bsdql\") pod \"redhat-marketplace-59ls5\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:33 crc 
kubenswrapper[5029]: I0313 22:01:33.038741 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsdql\" (UniqueName: \"kubernetes.io/projected/bf3ab57a-d640-4189-b8ab-f957bfd56c31-kube-api-access-bsdql\") pod \"redhat-marketplace-59ls5\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:33 crc kubenswrapper[5029]: I0313 22:01:33.038911 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-catalog-content\") pod \"redhat-marketplace-59ls5\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:33 crc kubenswrapper[5029]: I0313 22:01:33.038962 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-utilities\") pod \"redhat-marketplace-59ls5\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:33 crc kubenswrapper[5029]: I0313 22:01:33.039588 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-utilities\") pod \"redhat-marketplace-59ls5\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:33 crc kubenswrapper[5029]: I0313 22:01:33.039699 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-catalog-content\") pod \"redhat-marketplace-59ls5\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:33 crc kubenswrapper[5029]: I0313 22:01:33.065365 5029 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsdql\" (UniqueName: \"kubernetes.io/projected/bf3ab57a-d640-4189-b8ab-f957bfd56c31-kube-api-access-bsdql\") pod \"redhat-marketplace-59ls5\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:33 crc kubenswrapper[5029]: I0313 22:01:33.190551 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:33 crc kubenswrapper[5029]: I0313 22:01:33.782976 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-59ls5"] Mar 13 22:01:34 crc kubenswrapper[5029]: I0313 22:01:34.144242 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59ls5" event={"ID":"bf3ab57a-d640-4189-b8ab-f957bfd56c31","Type":"ContainerStarted","Data":"3b9c830aeebd6ca821f8c2faa652866f75a323d9fdcfdb6b29b8d32e7a01d1c4"} Mar 13 22:01:35 crc kubenswrapper[5029]: I0313 22:01:35.157934 5029 generic.go:334] "Generic (PLEG): container finished" podID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerID="c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a" exitCode=0 Mar 13 22:01:35 crc kubenswrapper[5029]: I0313 22:01:35.157994 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59ls5" event={"ID":"bf3ab57a-d640-4189-b8ab-f957bfd56c31","Type":"ContainerDied","Data":"c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a"} Mar 13 22:01:37 crc kubenswrapper[5029]: I0313 22:01:37.181685 5029 generic.go:334] "Generic (PLEG): container finished" podID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerID="dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97" exitCode=0 Mar 13 22:01:37 crc kubenswrapper[5029]: I0313 22:01:37.181821 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-59ls5" event={"ID":"bf3ab57a-d640-4189-b8ab-f957bfd56c31","Type":"ContainerDied","Data":"dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97"} Mar 13 22:01:38 crc kubenswrapper[5029]: I0313 22:01:38.194987 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59ls5" event={"ID":"bf3ab57a-d640-4189-b8ab-f957bfd56c31","Type":"ContainerStarted","Data":"b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f"} Mar 13 22:01:38 crc kubenswrapper[5029]: I0313 22:01:38.219161 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-59ls5" podStartSLOduration=3.664904398 podStartE2EDuration="6.219138029s" podCreationTimestamp="2026-03-13 22:01:32 +0000 UTC" firstStartedPulling="2026-03-13 22:01:35.16056194 +0000 UTC m=+5655.176644343" lastFinishedPulling="2026-03-13 22:01:37.714795571 +0000 UTC m=+5657.730877974" observedRunningTime="2026-03-13 22:01:38.218571312 +0000 UTC m=+5658.234653755" watchObservedRunningTime="2026-03-13 22:01:38.219138029 +0000 UTC m=+5658.235220432" Mar 13 22:01:41 crc kubenswrapper[5029]: I0313 22:01:41.602702 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:01:41 crc kubenswrapper[5029]: E0313 22:01:41.603928 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:01:43 crc kubenswrapper[5029]: I0313 22:01:43.194319 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:43 crc kubenswrapper[5029]: I0313 22:01:43.194893 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:43 crc kubenswrapper[5029]: I0313 22:01:43.275398 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:43 crc kubenswrapper[5029]: I0313 22:01:43.340395 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:43 crc kubenswrapper[5029]: I0313 22:01:43.515634 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-59ls5"] Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.264542 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-59ls5" podUID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerName="registry-server" containerID="cri-o://b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f" gracePeriod=2 Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.741359 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.883581 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-catalog-content\") pod \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.883730 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-utilities\") pod \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.884679 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-utilities" (OuterVolumeSpecName: "utilities") pod "bf3ab57a-d640-4189-b8ab-f957bfd56c31" (UID: "bf3ab57a-d640-4189-b8ab-f957bfd56c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.885025 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsdql\" (UniqueName: \"kubernetes.io/projected/bf3ab57a-d640-4189-b8ab-f957bfd56c31-kube-api-access-bsdql\") pod \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\" (UID: \"bf3ab57a-d640-4189-b8ab-f957bfd56c31\") " Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.886053 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-utilities\") on node \"crc\" DevicePath \"\"" Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.895870 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3ab57a-d640-4189-b8ab-f957bfd56c31-kube-api-access-bsdql" (OuterVolumeSpecName: "kube-api-access-bsdql") pod "bf3ab57a-d640-4189-b8ab-f957bfd56c31" (UID: "bf3ab57a-d640-4189-b8ab-f957bfd56c31"). InnerVolumeSpecName "kube-api-access-bsdql". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.914044 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf3ab57a-d640-4189-b8ab-f957bfd56c31" (UID: "bf3ab57a-d640-4189-b8ab-f957bfd56c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.988391 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsdql\" (UniqueName: \"kubernetes.io/projected/bf3ab57a-d640-4189-b8ab-f957bfd56c31-kube-api-access-bsdql\") on node \"crc\" DevicePath \"\"" Mar 13 22:01:45 crc kubenswrapper[5029]: I0313 22:01:45.988449 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3ab57a-d640-4189-b8ab-f957bfd56c31-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.277297 5029 generic.go:334] "Generic (PLEG): container finished" podID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerID="b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f" exitCode=0 Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.277355 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59ls5" event={"ID":"bf3ab57a-d640-4189-b8ab-f957bfd56c31","Type":"ContainerDied","Data":"b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f"} Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.277396 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59ls5" event={"ID":"bf3ab57a-d640-4189-b8ab-f957bfd56c31","Type":"ContainerDied","Data":"3b9c830aeebd6ca821f8c2faa652866f75a323d9fdcfdb6b29b8d32e7a01d1c4"} Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.277418 5029 scope.go:117] "RemoveContainer" containerID="b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.277579 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59ls5" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.320335 5029 scope.go:117] "RemoveContainer" containerID="dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.331792 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-59ls5"] Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.347009 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-59ls5"] Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.348538 5029 scope.go:117] "RemoveContainer" containerID="c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.425033 5029 scope.go:117] "RemoveContainer" containerID="b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f" Mar 13 22:01:46 crc kubenswrapper[5029]: E0313 22:01:46.426090 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f\": container with ID starting with b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f not found: ID does not exist" containerID="b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.426163 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f"} err="failed to get container status \"b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f\": rpc error: code = NotFound desc = could not find container \"b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f\": container with ID starting with b286b230dfcaa5ef0308b323a03f36e361a7d2888922d920b57bbbc4c16f867f not found: 
ID does not exist" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.426205 5029 scope.go:117] "RemoveContainer" containerID="dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97" Mar 13 22:01:46 crc kubenswrapper[5029]: E0313 22:01:46.426718 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97\": container with ID starting with dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97 not found: ID does not exist" containerID="dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.426793 5029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97"} err="failed to get container status \"dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97\": rpc error: code = NotFound desc = could not find container \"dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97\": container with ID starting with dbb3994550623719b00d1e4e34d4527bfeb35715f9d1553045db0bd8261dad97 not found: ID does not exist" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.426818 5029 scope.go:117] "RemoveContainer" containerID="c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a" Mar 13 22:01:46 crc kubenswrapper[5029]: E0313 22:01:46.427343 5029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a\": container with ID starting with c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a not found: ID does not exist" containerID="c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.427386 5029 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a"} err="failed to get container status \"c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a\": rpc error: code = NotFound desc = could not find container \"c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a\": container with ID starting with c1e25c8cfa2b9abf6030cfad82b068789e57b584830c4044f4b0aa1fecda815a not found: ID does not exist" Mar 13 22:01:46 crc kubenswrapper[5029]: I0313 22:01:46.619524 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" path="/var/lib/kubelet/pods/bf3ab57a-d640-4189-b8ab-f957bfd56c31/volumes" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.599960 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:01:56 crc kubenswrapper[5029]: E0313 22:01:56.601022 5029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28st2_openshift-machine-config-operator(fa028723-a519-4f82-860c-4c149f3a4e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.666286 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bhqtn"] Mar 13 22:01:56 crc kubenswrapper[5029]: E0313 22:01:56.666805 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerName="extract-content" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.666823 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerName="extract-content" Mar 13 
22:01:56 crc kubenswrapper[5029]: E0313 22:01:56.666894 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerName="extract-utilities" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.666902 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerName="extract-utilities" Mar 13 22:01:56 crc kubenswrapper[5029]: E0313 22:01:56.666916 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerName="registry-server" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.666922 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerName="registry-server" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.667122 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3ab57a-d640-4189-b8ab-f957bfd56c31" containerName="registry-server" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.668532 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.677073 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhqtn"] Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.769063 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-catalog-content\") pod \"community-operators-bhqtn\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") " pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.769648 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8kf9\" (UniqueName: \"kubernetes.io/projected/852002e2-e05a-46fb-aaa5-c499e4105235-kube-api-access-g8kf9\") pod \"community-operators-bhqtn\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") " pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.769840 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-utilities\") pod \"community-operators-bhqtn\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") " pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.871481 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-utilities\") pod \"community-operators-bhqtn\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") " pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.871542 5029 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-catalog-content\") pod \"community-operators-bhqtn\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") " pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.871617 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8kf9\" (UniqueName: \"kubernetes.io/projected/852002e2-e05a-46fb-aaa5-c499e4105235-kube-api-access-g8kf9\") pod \"community-operators-bhqtn\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") " pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.872428 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-utilities\") pod \"community-operators-bhqtn\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") " pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.872488 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-catalog-content\") pod \"community-operators-bhqtn\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") " pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.907325 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8kf9\" (UniqueName: \"kubernetes.io/projected/852002e2-e05a-46fb-aaa5-c499e4105235-kube-api-access-g8kf9\") pod \"community-operators-bhqtn\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") " pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:56 crc kubenswrapper[5029]: I0313 22:01:56.989403 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:01:57 crc kubenswrapper[5029]: I0313 22:01:57.690723 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhqtn"] Mar 13 22:01:58 crc kubenswrapper[5029]: I0313 22:01:58.409374 5029 generic.go:334] "Generic (PLEG): container finished" podID="852002e2-e05a-46fb-aaa5-c499e4105235" containerID="a8f359de4b6afe6d0d85d37756a95c85e3f42c6c3fd1f88303a6f27720d537d6" exitCode=0 Mar 13 22:01:58 crc kubenswrapper[5029]: I0313 22:01:58.410115 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhqtn" event={"ID":"852002e2-e05a-46fb-aaa5-c499e4105235","Type":"ContainerDied","Data":"a8f359de4b6afe6d0d85d37756a95c85e3f42c6c3fd1f88303a6f27720d537d6"} Mar 13 22:01:58 crc kubenswrapper[5029]: I0313 22:01:58.410219 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhqtn" event={"ID":"852002e2-e05a-46fb-aaa5-c499e4105235","Type":"ContainerStarted","Data":"866e0f3ee3a19a8ea672fcc79c698c07286bb700f788643cb0fb5ed70b10e4fa"} Mar 13 22:01:59 crc kubenswrapper[5029]: I0313 22:01:59.419119 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhqtn" event={"ID":"852002e2-e05a-46fb-aaa5-c499e4105235","Type":"ContainerStarted","Data":"8b9d008c675009ba7964237d99d3c64986c8243733b8c848d01a2a0ac3ae015e"} Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.142557 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557322-nd4n8"] Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.143869 5029 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557322-nd4n8" Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.145941 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.146794 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q" Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.151533 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.160492 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557322-nd4n8"] Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.250603 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g6bf\" (UniqueName: \"kubernetes.io/projected/794c5eb7-f6fd-4bbc-824e-d795e4b254bc-kube-api-access-8g6bf\") pod \"auto-csr-approver-29557322-nd4n8\" (UID: \"794c5eb7-f6fd-4bbc-824e-d795e4b254bc\") " pod="openshift-infra/auto-csr-approver-29557322-nd4n8" Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.352820 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g6bf\" (UniqueName: \"kubernetes.io/projected/794c5eb7-f6fd-4bbc-824e-d795e4b254bc-kube-api-access-8g6bf\") pod \"auto-csr-approver-29557322-nd4n8\" (UID: \"794c5eb7-f6fd-4bbc-824e-d795e4b254bc\") " pod="openshift-infra/auto-csr-approver-29557322-nd4n8" Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.381357 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g6bf\" (UniqueName: \"kubernetes.io/projected/794c5eb7-f6fd-4bbc-824e-d795e4b254bc-kube-api-access-8g6bf\") pod \"auto-csr-approver-29557322-nd4n8\" (UID: \"794c5eb7-f6fd-4bbc-824e-d795e4b254bc\") " 
pod="openshift-infra/auto-csr-approver-29557322-nd4n8" Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.443592 5029 generic.go:334] "Generic (PLEG): container finished" podID="852002e2-e05a-46fb-aaa5-c499e4105235" containerID="8b9d008c675009ba7964237d99d3c64986c8243733b8c848d01a2a0ac3ae015e" exitCode=0 Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.443762 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhqtn" event={"ID":"852002e2-e05a-46fb-aaa5-c499e4105235","Type":"ContainerDied","Data":"8b9d008c675009ba7964237d99d3c64986c8243733b8c848d01a2a0ac3ae015e"} Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.461289 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557322-nd4n8" Mar 13 22:02:00 crc kubenswrapper[5029]: I0313 22:02:00.917288 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557322-nd4n8"] Mar 13 22:02:01 crc kubenswrapper[5029]: I0313 22:02:01.462281 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557322-nd4n8" event={"ID":"794c5eb7-f6fd-4bbc-824e-d795e4b254bc","Type":"ContainerStarted","Data":"4323cb8455c3d2613c43870ff03d8a197d2c632ea78d359014add8112b5f9511"} Mar 13 22:02:02 crc kubenswrapper[5029]: I0313 22:02:02.481468 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhqtn" event={"ID":"852002e2-e05a-46fb-aaa5-c499e4105235","Type":"ContainerStarted","Data":"9cf3eb975351b0c0cdcc8184150146aa26644157af804db6dc6b55c745cb59c5"} Mar 13 22:02:03 crc kubenswrapper[5029]: I0313 22:02:03.499023 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557322-nd4n8" event={"ID":"794c5eb7-f6fd-4bbc-824e-d795e4b254bc","Type":"ContainerStarted","Data":"8244e519d1b6081d84c1fe839fac5eae97eec46bbdedba125bc9a5a3cdd62cfd"} Mar 13 22:02:03 crc 
kubenswrapper[5029]: I0313 22:02:03.532901 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557322-nd4n8" podStartSLOduration=2.337224197 podStartE2EDuration="3.532879538s" podCreationTimestamp="2026-03-13 22:02:00 +0000 UTC" firstStartedPulling="2026-03-13 22:02:01.087112332 +0000 UTC m=+5681.103194745" lastFinishedPulling="2026-03-13 22:02:02.282767663 +0000 UTC m=+5682.298850086" observedRunningTime="2026-03-13 22:02:03.525368552 +0000 UTC m=+5683.541450985" watchObservedRunningTime="2026-03-13 22:02:03.532879538 +0000 UTC m=+5683.548961951" Mar 13 22:02:03 crc kubenswrapper[5029]: I0313 22:02:03.539141 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bhqtn" podStartSLOduration=4.872675331 podStartE2EDuration="7.539114509s" podCreationTimestamp="2026-03-13 22:01:56 +0000 UTC" firstStartedPulling="2026-03-13 22:01:58.412767507 +0000 UTC m=+5678.428849910" lastFinishedPulling="2026-03-13 22:02:01.079206645 +0000 UTC m=+5681.095289088" observedRunningTime="2026-03-13 22:02:02.537128457 +0000 UTC m=+5682.553210860" watchObservedRunningTime="2026-03-13 22:02:03.539114509 +0000 UTC m=+5683.555196932" Mar 13 22:02:04 crc kubenswrapper[5029]: I0313 22:02:04.516930 5029 generic.go:334] "Generic (PLEG): container finished" podID="794c5eb7-f6fd-4bbc-824e-d795e4b254bc" containerID="8244e519d1b6081d84c1fe839fac5eae97eec46bbdedba125bc9a5a3cdd62cfd" exitCode=0 Mar 13 22:02:04 crc kubenswrapper[5029]: I0313 22:02:04.517029 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557322-nd4n8" event={"ID":"794c5eb7-f6fd-4bbc-824e-d795e4b254bc","Type":"ContainerDied","Data":"8244e519d1b6081d84c1fe839fac5eae97eec46bbdedba125bc9a5a3cdd62cfd"} Mar 13 22:02:05 crc kubenswrapper[5029]: I0313 22:02:05.976074 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557322-nd4n8" Mar 13 22:02:05 crc kubenswrapper[5029]: I0313 22:02:05.985471 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g6bf\" (UniqueName: \"kubernetes.io/projected/794c5eb7-f6fd-4bbc-824e-d795e4b254bc-kube-api-access-8g6bf\") pod \"794c5eb7-f6fd-4bbc-824e-d795e4b254bc\" (UID: \"794c5eb7-f6fd-4bbc-824e-d795e4b254bc\") " Mar 13 22:02:05 crc kubenswrapper[5029]: I0313 22:02:05.991103 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794c5eb7-f6fd-4bbc-824e-d795e4b254bc-kube-api-access-8g6bf" (OuterVolumeSpecName: "kube-api-access-8g6bf") pod "794c5eb7-f6fd-4bbc-824e-d795e4b254bc" (UID: "794c5eb7-f6fd-4bbc-824e-d795e4b254bc"). InnerVolumeSpecName "kube-api-access-8g6bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 22:02:06 crc kubenswrapper[5029]: I0313 22:02:06.091348 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g6bf\" (UniqueName: \"kubernetes.io/projected/794c5eb7-f6fd-4bbc-824e-d795e4b254bc-kube-api-access-8g6bf\") on node \"crc\" DevicePath \"\"" Mar 13 22:02:06 crc kubenswrapper[5029]: I0313 22:02:06.555730 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557322-nd4n8" event={"ID":"794c5eb7-f6fd-4bbc-824e-d795e4b254bc","Type":"ContainerDied","Data":"4323cb8455c3d2613c43870ff03d8a197d2c632ea78d359014add8112b5f9511"} Mar 13 22:02:06 crc kubenswrapper[5029]: I0313 22:02:06.556023 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4323cb8455c3d2613c43870ff03d8a197d2c632ea78d359014add8112b5f9511" Mar 13 22:02:06 crc kubenswrapper[5029]: I0313 22:02:06.555775 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557322-nd4n8" Mar 13 22:02:06 crc kubenswrapper[5029]: I0313 22:02:06.621018 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557316-xwt2b"] Mar 13 22:02:06 crc kubenswrapper[5029]: I0313 22:02:06.628437 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557316-xwt2b"] Mar 13 22:02:06 crc kubenswrapper[5029]: I0313 22:02:06.990637 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:02:06 crc kubenswrapper[5029]: I0313 22:02:06.990698 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:02:07 crc kubenswrapper[5029]: I0313 22:02:07.044059 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:02:07 crc kubenswrapper[5029]: I0313 22:02:07.625911 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bhqtn" Mar 13 22:02:07 crc kubenswrapper[5029]: I0313 22:02:07.702413 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhqtn"] Mar 13 22:02:08 crc kubenswrapper[5029]: I0313 22:02:08.617214 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731e82b-37f1-4810-90c4-fa12858b47f1" path="/var/lib/kubelet/pods/6731e82b-37f1-4810-90c4-fa12858b47f1/volumes" Mar 13 22:02:09 crc kubenswrapper[5029]: I0313 22:02:09.590194 5029 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bhqtn" podUID="852002e2-e05a-46fb-aaa5-c499e4105235" containerName="registry-server" containerID="cri-o://9cf3eb975351b0c0cdcc8184150146aa26644157af804db6dc6b55c745cb59c5" gracePeriod=2 Mar 13 22:02:10 
crc kubenswrapper[5029]: I0313 22:02:10.611234 5029 scope.go:117] "RemoveContainer" containerID="87bd3f2be9cec432e523aa7487b1360c53872e7924a4ac8d32c3569955aa1057" Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.639659 5029 generic.go:334] "Generic (PLEG): container finished" podID="852002e2-e05a-46fb-aaa5-c499e4105235" containerID="9cf3eb975351b0c0cdcc8184150146aa26644157af804db6dc6b55c745cb59c5" exitCode=0 Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.646038 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhqtn" event={"ID":"852002e2-e05a-46fb-aaa5-c499e4105235","Type":"ContainerDied","Data":"9cf3eb975351b0c0cdcc8184150146aa26644157af804db6dc6b55c745cb59c5"} Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.646096 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhqtn" event={"ID":"852002e2-e05a-46fb-aaa5-c499e4105235","Type":"ContainerDied","Data":"866e0f3ee3a19a8ea672fcc79c698c07286bb700f788643cb0fb5ed70b10e4fa"} Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.646109 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866e0f3ee3a19a8ea672fcc79c698c07286bb700f788643cb0fb5ed70b10e4fa" Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.671757 5029 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhqtn"
Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.803036 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8kf9\" (UniqueName: \"kubernetes.io/projected/852002e2-e05a-46fb-aaa5-c499e4105235-kube-api-access-g8kf9\") pod \"852002e2-e05a-46fb-aaa5-c499e4105235\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") "
Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.803181 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-catalog-content\") pod \"852002e2-e05a-46fb-aaa5-c499e4105235\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") "
Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.803251 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-utilities\") pod \"852002e2-e05a-46fb-aaa5-c499e4105235\" (UID: \"852002e2-e05a-46fb-aaa5-c499e4105235\") "
Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.804114 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-utilities" (OuterVolumeSpecName: "utilities") pod "852002e2-e05a-46fb-aaa5-c499e4105235" (UID: "852002e2-e05a-46fb-aaa5-c499e4105235"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.813694 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852002e2-e05a-46fb-aaa5-c499e4105235-kube-api-access-g8kf9" (OuterVolumeSpecName: "kube-api-access-g8kf9") pod "852002e2-e05a-46fb-aaa5-c499e4105235" (UID: "852002e2-e05a-46fb-aaa5-c499e4105235"). InnerVolumeSpecName "kube-api-access-g8kf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.864897 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "852002e2-e05a-46fb-aaa5-c499e4105235" (UID: "852002e2-e05a-46fb-aaa5-c499e4105235"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.905436 5029 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.905483 5029 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852002e2-e05a-46fb-aaa5-c499e4105235-utilities\") on node \"crc\" DevicePath \"\""
Mar 13 22:02:10 crc kubenswrapper[5029]: I0313 22:02:10.905494 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8kf9\" (UniqueName: \"kubernetes.io/projected/852002e2-e05a-46fb-aaa5-c499e4105235-kube-api-access-g8kf9\") on node \"crc\" DevicePath \"\""
Mar 13 22:02:11 crc kubenswrapper[5029]: I0313 22:02:11.652091 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28st2" event={"ID":"fa028723-a519-4f82-860c-4c149f3a4e4a","Type":"ContainerStarted","Data":"b15bb79d32ca00e4c24eece316ddcf1bf356de900cfd5f954d25b05b5354b108"}
Mar 13 22:02:11 crc kubenswrapper[5029]: I0313 22:02:11.652120 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhqtn"
Mar 13 22:02:11 crc kubenswrapper[5029]: I0313 22:02:11.698205 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhqtn"]
Mar 13 22:02:11 crc kubenswrapper[5029]: I0313 22:02:11.706447 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bhqtn"]
Mar 13 22:02:12 crc kubenswrapper[5029]: I0313 22:02:12.620803 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852002e2-e05a-46fb-aaa5-c499e4105235" path="/var/lib/kubelet/pods/852002e2-e05a-46fb-aaa5-c499e4105235/volumes"
Mar 13 22:02:17 crc kubenswrapper[5029]: I0313 22:02:17.850155 5029 scope.go:117] "RemoveContainer" containerID="8475699eabf49a57c9ae962f39ed0a3ddfb8e96998bf0a13bb06f64a3797ad35"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.157728 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557324-tjk5v"]
Mar 13 22:04:00 crc kubenswrapper[5029]: E0313 22:04:00.158659 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852002e2-e05a-46fb-aaa5-c499e4105235" containerName="extract-utilities"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.158676 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="852002e2-e05a-46fb-aaa5-c499e4105235" containerName="extract-utilities"
Mar 13 22:04:00 crc kubenswrapper[5029]: E0313 22:04:00.158726 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852002e2-e05a-46fb-aaa5-c499e4105235" containerName="extract-content"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.158734 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="852002e2-e05a-46fb-aaa5-c499e4105235" containerName="extract-content"
Mar 13 22:04:00 crc kubenswrapper[5029]: E0313 22:04:00.158745 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852002e2-e05a-46fb-aaa5-c499e4105235" containerName="registry-server"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.158754 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="852002e2-e05a-46fb-aaa5-c499e4105235" containerName="registry-server"
Mar 13 22:04:00 crc kubenswrapper[5029]: E0313 22:04:00.158765 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794c5eb7-f6fd-4bbc-824e-d795e4b254bc" containerName="oc"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.158773 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="794c5eb7-f6fd-4bbc-824e-d795e4b254bc" containerName="oc"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.159010 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="794c5eb7-f6fd-4bbc-824e-d795e4b254bc" containerName="oc"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.159037 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="852002e2-e05a-46fb-aaa5-c499e4105235" containerName="registry-server"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.159794 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557324-tjk5v"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.161600 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.162449 5029 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-55w9q"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.164760 5029 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.188272 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557324-tjk5v"]
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.213622 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz2ng\" (UniqueName: \"kubernetes.io/projected/ee810fe2-26a1-45c9-ba6e-face0e78f111-kube-api-access-lz2ng\") pod \"auto-csr-approver-29557324-tjk5v\" (UID: \"ee810fe2-26a1-45c9-ba6e-face0e78f111\") " pod="openshift-infra/auto-csr-approver-29557324-tjk5v"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.315676 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz2ng\" (UniqueName: \"kubernetes.io/projected/ee810fe2-26a1-45c9-ba6e-face0e78f111-kube-api-access-lz2ng\") pod \"auto-csr-approver-29557324-tjk5v\" (UID: \"ee810fe2-26a1-45c9-ba6e-face0e78f111\") " pod="openshift-infra/auto-csr-approver-29557324-tjk5v"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.343704 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz2ng\" (UniqueName: \"kubernetes.io/projected/ee810fe2-26a1-45c9-ba6e-face0e78f111-kube-api-access-lz2ng\") pod \"auto-csr-approver-29557324-tjk5v\" (UID: \"ee810fe2-26a1-45c9-ba6e-face0e78f111\") " pod="openshift-infra/auto-csr-approver-29557324-tjk5v"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.521402 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557324-tjk5v"
Mar 13 22:04:00 crc kubenswrapper[5029]: I0313 22:04:00.998583 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557324-tjk5v"]
Mar 13 22:04:01 crc kubenswrapper[5029]: W0313 22:04:01.002341 5029 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee810fe2_26a1_45c9_ba6e_face0e78f111.slice/crio-df5121dbd0f55840b04f33cd1ce19cac4da087ada3eaca1e440cee0ecb7ee471 WatchSource:0}: Error finding container df5121dbd0f55840b04f33cd1ce19cac4da087ada3eaca1e440cee0ecb7ee471: Status 404 returned error can't find the container with id df5121dbd0f55840b04f33cd1ce19cac4da087ada3eaca1e440cee0ecb7ee471
Mar 13 22:04:01 crc kubenswrapper[5029]: I0313 22:04:01.005877 5029 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 22:04:01 crc kubenswrapper[5029]: I0313 22:04:01.984589 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557324-tjk5v" event={"ID":"ee810fe2-26a1-45c9-ba6e-face0e78f111","Type":"ContainerStarted","Data":"df5121dbd0f55840b04f33cd1ce19cac4da087ada3eaca1e440cee0ecb7ee471"}
Mar 13 22:04:02 crc kubenswrapper[5029]: I0313 22:04:02.996776 5029 generic.go:334] "Generic (PLEG): container finished" podID="ee810fe2-26a1-45c9-ba6e-face0e78f111" containerID="836c3efaffcd1fa6436f7d607a8a39d8e6e55381484062e6f48f121321f2a8a7" exitCode=0
Mar 13 22:04:02 crc kubenswrapper[5029]: I0313 22:04:02.996881 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557324-tjk5v" event={"ID":"ee810fe2-26a1-45c9-ba6e-face0e78f111","Type":"ContainerDied","Data":"836c3efaffcd1fa6436f7d607a8a39d8e6e55381484062e6f48f121321f2a8a7"}
Mar 13 22:04:04 crc kubenswrapper[5029]: I0313 22:04:04.403675 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557324-tjk5v"
Mar 13 22:04:04 crc kubenswrapper[5029]: I0313 22:04:04.506530 5029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz2ng\" (UniqueName: \"kubernetes.io/projected/ee810fe2-26a1-45c9-ba6e-face0e78f111-kube-api-access-lz2ng\") pod \"ee810fe2-26a1-45c9-ba6e-face0e78f111\" (UID: \"ee810fe2-26a1-45c9-ba6e-face0e78f111\") "
Mar 13 22:04:04 crc kubenswrapper[5029]: I0313 22:04:04.515499 5029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee810fe2-26a1-45c9-ba6e-face0e78f111-kube-api-access-lz2ng" (OuterVolumeSpecName: "kube-api-access-lz2ng") pod "ee810fe2-26a1-45c9-ba6e-face0e78f111" (UID: "ee810fe2-26a1-45c9-ba6e-face0e78f111"). InnerVolumeSpecName "kube-api-access-lz2ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 22:04:04 crc kubenswrapper[5029]: I0313 22:04:04.611399 5029 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz2ng\" (UniqueName: \"kubernetes.io/projected/ee810fe2-26a1-45c9-ba6e-face0e78f111-kube-api-access-lz2ng\") on node \"crc\" DevicePath \"\""
Mar 13 22:04:05 crc kubenswrapper[5029]: I0313 22:04:05.029088 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557324-tjk5v" event={"ID":"ee810fe2-26a1-45c9-ba6e-face0e78f111","Type":"ContainerDied","Data":"df5121dbd0f55840b04f33cd1ce19cac4da087ada3eaca1e440cee0ecb7ee471"}
Mar 13 22:04:05 crc kubenswrapper[5029]: I0313 22:04:05.029187 5029 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557324-tjk5v"
Mar 13 22:04:05 crc kubenswrapper[5029]: I0313 22:04:05.029236 5029 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df5121dbd0f55840b04f33cd1ce19cac4da087ada3eaca1e440cee0ecb7ee471"
Mar 13 22:04:05 crc kubenswrapper[5029]: I0313 22:04:05.492340 5029 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557318-ddw78"]
Mar 13 22:04:05 crc kubenswrapper[5029]: I0313 22:04:05.500802 5029 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557318-ddw78"]
Mar 13 22:04:06 crc kubenswrapper[5029]: I0313 22:04:06.624994 5029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="546d8392-d924-412c-90b3-9a0aa9ad9ff1" path="/var/lib/kubelet/pods/546d8392-d924-412c-90b3-9a0aa9ad9ff1/volumes"
Mar 13 22:04:17 crc kubenswrapper[5029]: I0313 22:04:17.978204 5029 scope.go:117] "RemoveContainer" containerID="3d6a1ab7af89b202ad2c85a4dfdf49709cd8cbbf0bf87f44ba250b791381a958"
Mar 13 22:04:31 crc kubenswrapper[5029]: I0313 22:04:31.950423 5029 patch_prober.go:28] interesting pod/machine-config-daemon-28st2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 22:04:31 crc kubenswrapper[5029]: I0313 22:04:31.951063 5029 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28st2" podUID="fa028723-a519-4f82-860c-4c149f3a4e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.779038 5029 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tv575"]
Mar 13 22:04:32 crc kubenswrapper[5029]: E0313 22:04:32.780099 5029 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee810fe2-26a1-45c9-ba6e-face0e78f111" containerName="oc"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.780133 5029 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee810fe2-26a1-45c9-ba6e-face0e78f111" containerName="oc"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.780499 5029 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee810fe2-26a1-45c9-ba6e-face0e78f111" containerName="oc"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.785255 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.805374 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tv575"]
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.867274 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8674dc3c-2140-44f6-953b-a217db89e8b6-catalog-content\") pod \"redhat-operators-tv575\" (UID: \"8674dc3c-2140-44f6-953b-a217db89e8b6\") " pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.867840 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-622cp\" (UniqueName: \"kubernetes.io/projected/8674dc3c-2140-44f6-953b-a217db89e8b6-kube-api-access-622cp\") pod \"redhat-operators-tv575\" (UID: \"8674dc3c-2140-44f6-953b-a217db89e8b6\") " pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.868038 5029 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8674dc3c-2140-44f6-953b-a217db89e8b6-utilities\") pod \"redhat-operators-tv575\" (UID: \"8674dc3c-2140-44f6-953b-a217db89e8b6\") " pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.971251 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-622cp\" (UniqueName: \"kubernetes.io/projected/8674dc3c-2140-44f6-953b-a217db89e8b6-kube-api-access-622cp\") pod \"redhat-operators-tv575\" (UID: \"8674dc3c-2140-44f6-953b-a217db89e8b6\") " pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.971370 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8674dc3c-2140-44f6-953b-a217db89e8b6-utilities\") pod \"redhat-operators-tv575\" (UID: \"8674dc3c-2140-44f6-953b-a217db89e8b6\") " pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.971550 5029 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8674dc3c-2140-44f6-953b-a217db89e8b6-catalog-content\") pod \"redhat-operators-tv575\" (UID: \"8674dc3c-2140-44f6-953b-a217db89e8b6\") " pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.972206 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8674dc3c-2140-44f6-953b-a217db89e8b6-utilities\") pod \"redhat-operators-tv575\" (UID: \"8674dc3c-2140-44f6-953b-a217db89e8b6\") " pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.972323 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8674dc3c-2140-44f6-953b-a217db89e8b6-catalog-content\") pod \"redhat-operators-tv575\" (UID: \"8674dc3c-2140-44f6-953b-a217db89e8b6\") " pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:32 crc kubenswrapper[5029]: I0313 22:04:32.997690 5029 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-622cp\" (UniqueName: \"kubernetes.io/projected/8674dc3c-2140-44f6-953b-a217db89e8b6-kube-api-access-622cp\") pod \"redhat-operators-tv575\" (UID: \"8674dc3c-2140-44f6-953b-a217db89e8b6\") " pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:33 crc kubenswrapper[5029]: I0313 22:04:33.121367 5029 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:33 crc kubenswrapper[5029]: I0313 22:04:33.614546 5029 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tv575"]
Mar 13 22:04:34 crc kubenswrapper[5029]: I0313 22:04:34.380328 5029 generic.go:334] "Generic (PLEG): container finished" podID="8674dc3c-2140-44f6-953b-a217db89e8b6" containerID="675bb9f0dbd2335d5534764b0cc68c6fc046d0ef503d0e8c8f905c860575fe53" exitCode=0
Mar 13 22:04:34 crc kubenswrapper[5029]: I0313 22:04:34.380437 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tv575" event={"ID":"8674dc3c-2140-44f6-953b-a217db89e8b6","Type":"ContainerDied","Data":"675bb9f0dbd2335d5534764b0cc68c6fc046d0ef503d0e8c8f905c860575fe53"}
Mar 13 22:04:34 crc kubenswrapper[5029]: I0313 22:04:34.380712 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tv575" event={"ID":"8674dc3c-2140-44f6-953b-a217db89e8b6","Type":"ContainerStarted","Data":"46c95ec73a79e570a40920dd429292451d06ede8c86da06a082f0d83dfae8d7f"}
Mar 13 22:04:35 crc kubenswrapper[5029]: I0313 22:04:35.394865 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tv575" event={"ID":"8674dc3c-2140-44f6-953b-a217db89e8b6","Type":"ContainerStarted","Data":"11035d952f672a8832a308c7ace5a8e9ad3d3446808761078981013adc44ffbe"}
Mar 13 22:04:38 crc kubenswrapper[5029]: I0313 22:04:38.433928 5029 generic.go:334] "Generic (PLEG): container finished" podID="8674dc3c-2140-44f6-953b-a217db89e8b6" containerID="11035d952f672a8832a308c7ace5a8e9ad3d3446808761078981013adc44ffbe" exitCode=0
Mar 13 22:04:38 crc kubenswrapper[5029]: I0313 22:04:38.434052 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tv575" event={"ID":"8674dc3c-2140-44f6-953b-a217db89e8b6","Type":"ContainerDied","Data":"11035d952f672a8832a308c7ace5a8e9ad3d3446808761078981013adc44ffbe"}
Mar 13 22:04:39 crc kubenswrapper[5029]: I0313 22:04:39.447701 5029 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tv575" event={"ID":"8674dc3c-2140-44f6-953b-a217db89e8b6","Type":"ContainerStarted","Data":"cfb1c859fed91c8a802b79cbc641e113c3f3ef9abd6796eec896171e64ba6b79"}
Mar 13 22:04:39 crc kubenswrapper[5029]: I0313 22:04:39.473740 5029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tv575" podStartSLOduration=3.009734376 podStartE2EDuration="7.473718378s" podCreationTimestamp="2026-03-13 22:04:32 +0000 UTC" firstStartedPulling="2026-03-13 22:04:34.383475506 +0000 UTC m=+5834.399557929" lastFinishedPulling="2026-03-13 22:04:38.847459538 +0000 UTC m=+5838.863541931" observedRunningTime="2026-03-13 22:04:39.470606803 +0000 UTC m=+5839.486689226" watchObservedRunningTime="2026-03-13 22:04:39.473718378 +0000 UTC m=+5839.489800791"
Mar 13 22:04:43 crc kubenswrapper[5029]: I0313 22:04:43.122303 5029 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:43 crc kubenswrapper[5029]: I0313 22:04:43.155776 5029 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tv575"
Mar 13 22:04:44 crc kubenswrapper[5029]: I0313 22:04:44.244952 5029 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tv575" podUID="8674dc3c-2140-44f6-953b-a217db89e8b6" containerName="registry-server" probeResult="failure" output=<
Mar 13 22:04:44 crc kubenswrapper[5029]: timeout: failed to connect service ":50051" within 1s
Mar 13 22:04:44 crc kubenswrapper[5029]: >